Latest publications: ACM SIGGRAPH 2018 Emerging Technologies

SEER
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214921
Takayuki Todo
SEER (Simulative Emotional Expression Robot) is an animatronic humanoid robot that generates gaze and emotional facial expressions, improving animativity, lifelikeness, and impressiveness through the integrated design of modeling, mechanism, materials, and computing. The robot can simulate a user's movement, gaze, and facial expressions detected by a camera sensor. This system can be applied to puppetry, telepresence avatars, and interactive automation.
Citations: 11
Hands-free augmented reality for vascular interventions
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3236462
A. Grinshpoon, S. Sadri, Gabrielle J. Loeb, Carmine Elvezio, S. Siu, Steven K. Feiner
During a vascular intervention (a type of minimally invasive surgical procedure), physicians maneuver catheters and wires through a patient's blood vessels to reach a desired location in the body. Since the relevant anatomy is typically not directly visible in these procedures, virtual reality and augmented reality systems have been developed to assist in 3D navigation. Because both of a physician's hands may already be occupied, we developed an augmented reality system supporting hands-free interaction techniques that use voice and head tracking to enable the physician to interact with 3D virtual content on a head-worn display while leaving both hands available intraoperatively. We demonstrate how a virtual 3D anatomical model can be rotated and scaled using small head rotations through first-order (rate) control, and can be rigidly coupled to the head for combined translation and rotation through zero-order control. This enables easy manipulation of a model while it stays close to the center of the physician's field of view.
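The distinction the abstract draws between first-order (rate) control and zero-order control can be sketched in a few lines. This is an illustrative sketch only; the gain, dead-zone threshold, and function names are our assumptions, not the authors' implementation:

```python
def rate_control(head_angle_deg, dead_zone_deg=2.0, gain=0.5):
    """First-order (rate) control: a small head rotation sets the model's
    angular *velocity*, so the model keeps turning while the head is held
    off-center. Angles inside the dead zone produce no motion, letting the
    physician glance around without disturbing the model."""
    if abs(head_angle_deg) <= dead_zone_deg:
        return 0.0
    sign = 1.0 if head_angle_deg > 0 else -1.0
    return gain * sign * (abs(head_angle_deg) - dead_zone_deg)

def zero_order_control(model_pose, head_pose):
    """Zero-order control: the model is rigidly coupled to the head, so its
    pose tracks the head pose directly (combined translation and rotation)."""
    return head_pose

# Integrating the rate over time yields the model's yaw: holding the head
# 10 degrees right for one second at 60 Hz turns the model by 4 degrees.
yaw, dt = 0.0, 1.0 / 60.0
for _ in range(60):
    yaw += rate_control(10.0) * dt
```

The dead zone is what makes rate control comfortable in practice: without it, sensor noise and small involuntary head movements would continuously drift the model.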
Citations: 8
LevioPole
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214913
Tomoya Sasaki, R. S. Hartanto, Kao-Hua Liu, Keitaro Tsuchiya, Atsushi Hiyama, Masahiko Inami
We present LevioPole, a rod-like device that provides mid-air haptic feedback for full-body interaction in virtual reality, augmented reality, or other daily activities. The device is built from two rotor units, each comprising propellers, motors, speed controllers, batteries, and sensors, allowing portability and ease of use. With a rotor unit at each end of the pole, the rotors generate both rotational and linear forces that can be driven according to the target application. In this paper, we introduce example applications in both VR and physical environments: embodied gaming with haptic feedback and walking navigation in a specific direction.
Citations: 8
Real-time non-line-of-sight imaging
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214920
Matthew O'Toole, David B. Lindell, Gordon Wetzstein
Non-line-of-sight (NLOS) imaging aims at recovering the shape of objects hidden outside the direct line of sight of a camera. In this work, we report on a new approach for acquiring time-resolved measurements that are suitable for NLOS imaging. The system uses a confocalized single-photon detector and pulsed laser. As opposed to previously-proposed NLOS imaging systems, our setup is very similar to LIDAR systems used for autonomous vehicles and it facilitates a closed-form solution of the associated inverse problem, which we derive in this work. This algorithm, dubbed the Light Cone Transform, is three orders of magnitude faster and more memory efficient than existing methods. We demonstrate experimental results for indoor and outdoor scenes captured and reconstructed with the proposed confocal NLOS imaging system.
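The closed-form inversion rests on a confocal image-formation model; the following is a hedged sketch in standard notation (the symbols are our assumptions, following the common published formulation of the light-cone transform, not an excerpt from this abstract):

```latex
% Confocal NLOS measurement model (sketch): the transient measurement
% \tau at illuminated-and-imaged wall point (x', y') and time t
% integrates the hidden albedo \rho over a spherical shell of radius tc/2:
\tau(x', y', t) = \iiint_\Omega \frac{\rho(x, y, z)}{r^4}\,
    \delta\!\left(2\sqrt{(x - x')^2 + (y - y')^2 + z^2} - t c\right)
    \,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z,
\qquad r = \sqrt{(x - x')^2 + (y - y')^2 + z^2}.
% The light-cone transform substitutes z = \sqrt{u} and v = (tc/2)^2,
% turning the shell integral into a 3D convolution, which can then be
% inverted in closed form with FFTs (e.g., via a Wiener filter).
```

The convolutional structure is what yields the three-orders-of-magnitude speed and memory advantage claimed above: the inverse reduces to elementwise filtering in the frequency domain rather than solving a large linear system.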
Citations: 15
Fairlift
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214919
Yuji Matsuura, Naoya Koizumi
FairLift is an interaction system involving mid-air images that are visible to the naked eye under and on a water surface. In this system, the water surface reflects light from micro-mirror array plates, and a mid-air image appears. The system lets a user interact with the mid-air image by controlling the position of the image on a light-source display according to the water level, measured with an ultrasonic sensor. The contributions of this system are enriching interaction with mid-air images and addressing the limitations of conventional water-display systems.
Citations: 9
Verifocal
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214925
Pierre-Yves Laffont, Ali Hasnain, Pierre-Yves Guillemet, Samuel Wirajaya, Joe Khoo, D. Teng, Jean-Charles Bazin
The vergence-accommodation conflict is a fundamental cause of discomfort in today's Virtual and Augmented Reality (VR/AR). We present a novel software platform and hardware for varifocal head-mounted displays (HMDs) to generate consistent accommodation cues and account for the user's prescription. We investigate multiple varifocal optical systems and propose the world's first varifocal mobile HMD based on Alvarez lenses. We also introduce a varifocal rendering pipeline, which corrects for distortion introduced by the optical focus adjustment, approximates retinal blur, incorporates eye tracking, and leverages rendered content to correct noisy eye-tracking results. We demonstrate the platform running in compact VR headsets and present initial results in video pass-through AR.
Citations: 8
Human support robot (HSR)
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3233972
Takashi Yamamoto, Tamaki Nishino, H. Kajima, M. Ohta, Koichi Ikeda
Interest is growing worldwide in mobile manipulators capable of performing physical work in living spaces, driven by population aging and declining birth rates and by the expectation of improving quality of life (QOL). Research and development in intelligent sensing and software that enable advanced recognition, judgment, and motion is essential to realize household work by robots. To accelerate this research, we have developed a compact and safe research platform, the Human Support Robot (HSR), which can be operated in an actual home environment. We expect overall R&D to accelerate when many researchers use a common robot platform, since it enables them to share their research results. In this paper, we introduce the HSR design and its utilization.
Citations: 34
A full-color single-chip-DLP projector with an embedded 2400-fps homography warping engine
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214927
S. Kagami, K. Hashimoto
We demonstrate a 24-bit full-color projector that achieves over-2400-fps motion adaptability to a fast-moving planar surface using single-chip DLP technology, which will be useful for projection-mapping applications in highly dynamic scenes. The projector can be interfaced with a host PC via standard HDMI and USB without imposing a high computational burden.
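The warping engine's core operation, a planar homography, maps each projector pixel through a 3x3 matrix updated per frame from the tracked pose of the moving plane. A minimal sketch of the per-pixel math (illustrative only; the function and variable names are ours, and the real engine runs this in embedded hardware, not Python):

```python
def warp_point(H, p):
    """Apply a 3x3 planar homography H (row-major nested sequence) to a
    2D pixel p, including the perspective divide. Remapping every output
    pixel this way re-registers the projected image onto the plane."""
    x = H[0][0] * p[0] + H[0][1] * p[1] + H[0][2]
    y = H[1][0] * p[0] + H[1][1] * p[1] + H[1][2]
    w = H[2][0] * p[0] + H[2][1] * p[1] + H[2][2]
    return (x / w, y / w)

# A pure-translation homography shifts the projected image so that it
# "follows" the surface as the surface moves:
H_shift = [[1.0, 0.0, 5.0],
           [0.0, 1.0, -3.0],
           [0.0, 0.0, 1.0]]
```

Because a homography has only eight degrees of freedom, updating it at 2400 fps is far cheaper than re-rendering content, which is why the host PC can stay on standard HDMI and USB.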
Citations: 12
ACM SIGGRAPH 2018 Emerging Technologies
Pub Date : 2018-08-12 DOI: 10.1145/3214907
Citations: 0
Steerable application-adaptive near eye displays
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214911
Kishore Rathinavel, Praneeth Chakravarthula, K. Akşit, J. Spjut, Ben Boudaoud, T. Whitted, D. Luebke, H. Fuchs
The design challenges of see-through near-eye displays can be mitigated by specializing an augmented reality device for a particular application. We present a novel optical design for augmented reality near-eye displays that exploits 3D stereolithography printing to achieve characteristics similar to progressive prescription binoculars. We propose manufacturing interchangeable optical components using 3D printing, yielding arbitrarily shaped static projection-screen surfaces adapted to the targeted applications. We identify a computational optical design methodology to generate the corresponding optical components, leading to small compute and power demands. To this end, we introduce our augmented reality prototype with a moderate form factor and a large field of view. We also show that our prototype promises high resolution via a foveation technique that uses a moving lens in front of the projection system. We believe our display technique provides a gateway to application-adaptive, easily replicable, customizable, and cost-effective near-eye display designs.
Citations: 3