
ACM SIGGRAPH 2018 Emerging Technologies: latest publications

CHICAP
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214924
Yong-Ho Lee, Mincheol Kim, Hwang-Youn Kim, Dongmyoung Lee, Bum-Jae You
In this research, we propose a cost-effective three-finger exoskeleton hand motion-capture device and a physics-engine-based hand-interaction module for immersive manipulation of virtual objects. The device provides 12-DOF finger-motion data through a unique bevel-gear structure and six 3D magnetic sensors. It keeps the error in the relative distance between two fingertips below 2 mm and lets the user reproduce precise hand motion while the complex joint data are processed in real time. We synchronize hand motion with a physics-engine-based interaction framework that includes a grasp interpreter and multi-modal feedback in virtual reality, minimizing penetration of the hand into objects. The system supports object manipulation across a wide range of tasks in virtual environments.
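The fingertip-distance error quoted above can be checked with simple forward kinematics over the captured joint angles. The sketch below (link lengths, base positions, and angles are all hypothetical, not taken from the paper) treats two fingers as planar chains and computes the relative fingertip distance:

```python
import math

def fingertip_position(base, link_lengths, joint_angles):
    """Planar forward kinematics: accumulate joint angles along the chain."""
    x, y = base
    angle = 0.0
    for length, q in zip(link_lengths, joint_angles):
        angle += q
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

# Hypothetical two-finger configuration (lengths in mm, angles in radians).
thumb = fingertip_position((0.0, 0.0), (50.0, 35.0), (math.pi / 3, math.pi / 6))
index = fingertip_position((60.0, 0.0), (45.0, 30.0, 25.0),
                           (2 * math.pi / 3, math.pi / 6, math.pi / 12))

# Relative distance between the two fingertips -- the quantity against which
# the device's sub-2-mm tracking error is evaluated.
distance = math.dist(thumb, index)
```

In the actual device the joint angles would come from the six 3D magnetic sensors decoded through the bevel-gear structure; here they are literal constants for illustration.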
Citations: 4
Headlight: egocentric visual augmentation by wearable wide projector
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214926
Shunichi Kasahara
Visual augmentation of the real environment has the potential not only to display information but also to provide a new perception of the physical world. However, currently available mixed-reality technologies cannot provide a sufficiently wide angle of view. We therefore introduce "Headlight", a wearable projector system that provides wide egocentric visual augmentation. Our system consists of a small laser projector with a fish-eye wide-conversion lens, a headphone, and a pose tracker. Headlight provides a projection angle of approximately 105 degrees horizontal and 55 degrees vertical from the user's point of view. A three-dimensional virtual space consistent with the physical environment is rendered with a virtual camera driven by the device's tracking information. By applying an inverse correction of the lens distortion and projecting the rendered image, Headlight performs consistent visual augmentation in the real world. With Headlight, we envision that physical phenomena humans cannot otherwise perceive will become perceivable through visual augmentation.
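The "inverse correction of the lens distortion" step can be sketched with a standard polynomial radial-distortion model: the renderer pre-warps the image so that, after passing through the wide-conversion lens, straight lines land where intended. The coefficients below are hypothetical, and the fixed-point inversion is one common way to invert such a model, not necessarily the authors' method:

```python
def distort(r, k1, k2):
    """Forward radial distortion: r_d = r * (1 + k1*r^2 + k2*r^4)."""
    return r * (1.0 + k1 * r ** 2 + k2 * r ** 4)

def undistort(r_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration, so a rendered
    image can be pre-warped before it passes through the fish-eye lens."""
    r = r_d
    for _ in range(iters):
        r = r_d / (1.0 + k1 * r ** 2 + k2 * r ** 4)
    return r

# Hypothetical distortion coefficients for the wide-conversion lens.
k1, k2 = -0.18, 0.02
r_d = distort(0.5, k1, k2)     # normalized radius after the lens
r_rec = undistort(r_d, k1, k2) # recovers the pre-lens radius
```

Each rendered pixel's radius from the optical axis would be mapped through `undistort` (or its inverse lookup table) before projection.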
Citations: 3
LevioPole
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214913
Tomoya Sasaki, R. S. Hartanto, Kao-Hua Liu, Keitaro Tsuchiya, Atsushi Hiyama, Masahiko Inami
We present LevioPole, a rod-like device that provides mid-air haptic feedback for full-body interaction in virtual reality, augmented reality, and other daily activities. The device is built from two rotor units (propellers, motors, speed controllers, batteries, and sensors), making it portable and easy to use. With one rotor unit at each end of the pole, the rotors generate both rotational and linear forces that can be driven according to the target application. In this paper, we introduce example applications in both VR and physical environments: embodied gaming with haptic feedback, and walking navigation in a specified direction.
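The combination of rotational and linear forces follows directly from having a thrust source at each end of the pole. A minimal sketch of that decomposition (a 1-D simplification with made-up thrust values, not the authors' control code):

```python
def pole_wrench(thrust_a, thrust_b, half_length):
    """Net linear force (N) and torque about the pole's center (N*m)
    produced by two rotor units at the ends of a pole of length
    2*half_length (m). Thrusts act perpendicular to the pole."""
    force = thrust_a + thrust_b                    # translational cue
    torque = (thrust_b - thrust_a) * half_length   # rotational cue
    return force, torque

# Equal thrusts -> pure linear force, no rotation.
f1, t1 = pole_wrench(2.0, 2.0, 0.5)    # (4.0, 0.0)
# Opposing thrusts -> pure torque, no net translation.
f2, t2 = pole_wrench(-1.0, 1.0, 0.5)   # (0.0, 1.0)
```

An application would pick the (thrust_a, thrust_b) pair that realizes the desired mix of pushing and twisting cues.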
Citations: 8
Real-time non-line-of-sight imaging
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214920
Matthew O'Toole, David B. Lindell, Gordon Wetzstein
Non-line-of-sight (NLOS) imaging aims to recover the shape of objects hidden outside the direct line of sight of a camera. In this work, we report a new approach to acquiring time-resolved measurements suitable for NLOS imaging. The system uses a confocalized single-photon detector and a pulsed laser. Unlike previously proposed NLOS imaging systems, our setup closely resembles the LIDAR systems used on autonomous vehicles, and it admits a closed-form solution of the associated inverse problem, which we derive in this work. The resulting algorithm, dubbed the Light Cone Transform, is three orders of magnitude faster and more memory-efficient than existing methods. We demonstrate experimental results for indoor and outdoor scenes captured and reconstructed with the proposed confocal NLOS imaging system.
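The key idea behind the Light Cone Transform is a change of variables along the time axis that turns the confocal measurement model into a shift-invariant convolution, which can then be deconvolved in closed form. The 1-D sketch below shows only that resampling step, with a simplified attenuation term standing in for the radiometric falloff of the full derivation; grid sizes and constants are illustrative:

```python
def lct_resample(transient, c=3e8, dt=1e-12):
    """Change-of-variables step of the Light Cone Transform, sketched in
    1-D: resample a transient measured on a uniform time grid t onto a
    grid uniform in v = (c*t/2)**2. After this step the confocal NLOS
    problem becomes a shift-invariant convolution."""
    n = len(transient)
    t = [i * dt for i in range(n)]
    # Simplified attenuation correction (a stand-in for the exact
    # radiometric term in the paper's derivation).
    scaled = [h * ti for h, ti in zip(transient, t)]
    v_max = (c * t[-1] / 2.0) ** 2
    out = []
    for i in range(n):
        vi = v_max * i / (n - 1)
        ti = 2.0 * vi ** 0.5 / c          # invert v = (c*t/2)**2
        j = min(int(ti / dt), n - 2)      # linear interpolation
        frac = ti / dt - j
        out.append(scaled[j] * (1 - frac) + scaled[j + 1] * frac)
    return out
```

In the full 3-D pipeline this resampling is applied per scan position, followed by an FFT-based Wiener-style deconvolution against the light-cone kernel.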
Citations: 15
Fairlift
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214919
Yuji Matsuura, Naoya Koizumi
FairLift is an interaction system involving mid-air images that are visible to the naked eye under and on a water surface. In this system, the water surface reflects light from micro-mirror array plates, forming a mid-air image. The system lets a user interact with the mid-air image by controlling the image position on a light-source display according to the water level measured with an ultrasonic sensor. The contributions of this system are richer interaction with mid-air images and overcoming the limitations of conventional water-display systems.
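The control loop described above reduces to mapping the ultrasonic range reading to a display offset so the mid-air image stays locked to the moving surface. A minimal sketch, where the sensor mounting height and gain are hypothetical placeholders for the system's actual geometry:

```python
def image_offset(sensor_distance_mm, sensor_height_mm, gain=2.0):
    """Map an ultrasonic water-level reading to a vertical offset of the
    image on the light-source display. The micro-mirror array plate forms
    the mid-air image symmetrically about the plate, so a change in water
    level is compensated by a proportional shift of the source image.
    Gain and geometry here are hypothetical."""
    water_level = sensor_height_mm - sensor_distance_mm  # surface height
    return gain * water_level

offset_high = image_offset(100.0, 400.0)  # water near the sensor
offset_low = image_offset(350.0, 400.0)   # water far from the sensor
```

Each frame, the measured offset would reposition the rendered content before it is reflected by the micro-mirror array plate.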
Citations: 9
Verifocal
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214925
Pierre-Yves Laffont, Ali Hasnain, Pierre-Yves Guillemet, Samuel Wirajaya, Joe Khoo, D. Teng, Jean-Charles Bazin
The vergence-accommodation conflict is a fundamental cause of discomfort in today's virtual and augmented reality (VR/AR). We present a novel software platform and hardware for varifocal head-mounted displays (HMDs) that generate consistent accommodation cues and account for the user's prescription. We investigate multiple varifocal optical systems and propose the world's first varifocal mobile HMD based on Alvarez lenses. We also introduce a varifocal rendering pipeline that corrects the distortion introduced by the optical focus adjustment, approximates retinal blur, incorporates eye tracking, and leverages rendered content to correct noisy eye-tracking results. We demonstrate the platform running in compact VR headsets and present initial results in video pass-through AR.
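The "approximates retinal blur" step can be illustrated with the standard small-angle model: the blur circle scales with pupil diameter times the defocus expressed in diopters. This is a textbook approximation a varifocal renderer might use, not necessarily the paper's exact model; the viewing distances below are made up:

```python
def retinal_blur(pupil_mm, focus_m, object_m):
    """Approximate retinal blur circle (in mrad of visual angle) as pupil
    diameter times the defocus in diopters -- the small-angle model a
    varifocal renderer can use to synthesize depth-of-field blur."""
    defocus_diopters = abs(1.0 / focus_m - 1.0 / object_m)
    return pupil_mm * defocus_diopters  # mm * (1/m) = mrad

# An eye accommodated at 0.5 m viewing content rendered at 2 m:
blur = retinal_blur(4.0, 0.5, 2.0)       # 4 * |2.0 - 0.5| = 6.0 mrad
in_focus = retinal_blur(4.0, 2.0, 2.0)   # zero defocus -> no blur
```

In a varifocal pipeline, the focus distance comes from the tunable optics and the per-pixel object distance from the depth buffer, so each pixel gets a blur kernel sized by this quantity.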
Citations: 8
Human support robot (HSR)
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3233972
Takashi Yamamoto, Tamaki Nishino, H. Kajima, M. Ohta, Koichi Ikeda
Interest is growing worldwide in mobile manipulators capable of performing physical work in living spaces, driven by population aging with declining birth rates and by the expectation of improving quality of life (QOL). Research and development in intelligent sensing and software is essential to enable the advanced recognition, judgment, and motion that let robots carry out household work. To accelerate this research, we have developed a compact and safe research platform, the Human Support Robot (HSR), which can be operated in an actual home environment. We expect that R&D as a whole will accelerate when many researchers use a common robot platform, since it enables them to share their results. In this paper, we introduce the design of the HSR and its use.
Citations: 34
A full-color single-chip-DLP projector with an embedded 2400-fps homography warping engine
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214927
S. Kagami, K. Hashimoto
We demonstrate a 24-bit full-color projector that achieves over 2400-fps motion adaptability to a fast-moving planar surface using single-chip DLP technology, which is useful for projection-mapping applications in highly dynamic scenes. The projector interfaces with a host PC via standard HDMI and USB without imposing a high computational burden.
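The per-frame operation of a homography warping engine is the projective mapping of image coordinates by a 3x3 matrix. A minimal sketch of that mapping (the example matrix is a hypothetical pose update, not data from the paper):

```python
def apply_homography(H, point):
    """Warp a 2-D point by a 3x3 homography in homogeneous coordinates --
    the mapping a planar-surface warping engine applies to every frame to
    keep projected content registered to the moving surface."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# A pure translation expressed as a homography (hypothetical pose update).
H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, 5.0],
     [0.0, 0.0, 1.0]]
warped = apply_homography(H, (2.0, 3.0))  # -> (12.0, 8.0)
```

Running this mapping at 2400 fps in an embedded engine, rather than on the host PC, is what keeps the computational burden off the HDMI/USB interface.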
Citations: 12
Steerable application-adaptive near eye displays
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214911
Kishore Rathinavel, Praneeth Chakravarthula, K. Akşit, J. Spjut, Ben Boudaoud, T. Whitted, D. Luebke, H. Fuchs
The design challenges of see-through near-eye displays can be mitigated by specializing an augmented-reality device for a particular application. We present a novel optical design for augmented-reality near-eye displays that exploits 3D stereolithography printing to achieve characteristics similar to progressive prescription binoculars. We propose manufacturing interchangeable optical components by 3D printing, yielding arbitrarily shaped static projection-screen surfaces adapted to the targeted applications. We identify a computational optical-design methodology for generating the corresponding optical components, leading to small compute and power demands. To this end, we introduce an augmented-reality prototype with a moderate form factor and a large field of view, and we show that the prototype promises high resolution through a foveation technique that uses a moving lens in front of a projection system. We believe our display technique provides a gateway to application-adaptive, easily replicable, customizable, and cost-effective near-eye display designs.
Citations: 3
ACM SIGGRAPH 2018 Emerging Technologies
Pub Date : 2018-08-12 DOI: 10.1145/3214907
Citations: 0