
International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality: Latest Publications

Message from the ISMAR 2022 Science and Technology Conference Program Chairs
Henry Duh, Ian Williams, Jens Grubert, J. A. Jones, Jianmin Zheng
{"title":"Message from the ISMAR 2022 Science and Technology Conference Program Chairs","authors":"Henry Duh, Ian Williams, Jens Grubert, J. A. Jones, Jianmin Zheng","doi":"10.1109/ismar55827.2022.00006","DOIUrl":"https://doi.org/10.1109/ismar55827.2022.00006","url":null,"abstract":"","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87284095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Keynote Speakers
H. Fuchs
{"title":"Keynote Speakers","authors":"H. Fuchs","doi":"10.1109/ismar55827.2022.00010","DOIUrl":"https://doi.org/10.1109/ismar55827.2022.00010","url":null,"abstract":"","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75200829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Message from the ISMAR 2020 Workshop and Tutorial Chairs
M. Fiorentino, R. Radkowski
{"title":"Message from the ISMAR 2020 Workshop and Tutorial Chairs","authors":"M. Fiorentino, R. Radkowski","doi":"10.1109/ismar50242.2020.00009","DOIUrl":"https://doi.org/10.1109/ismar50242.2020.00009","url":null,"abstract":"","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80003031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An Intelligent Augmented Reality Training Framework for Neonatal Endotracheal Intubation.
Shang Zhao, Xiao Xiao, Qiyue Wang, Xiaoke Zhang, Wei Li, Lamia Soghier, James Hahn

Neonatal Endotracheal Intubation (ETI) is a critical resuscitation skill that requires tremendous practice by trainees before clinical exposure. However, the current manikin-based training regimen cannot provide satisfactory real-time procedural guidance for accurate assessment because there is no see-through visualization inside the manikin. Training efficiency is further reduced by the limited availability of expert instructors, which inevitably results in a long learning curve for trainees. To this end, we propose an intelligent Augmented Reality (AR) training framework that provides trainees with complete visualization of the ETI procedure for real-time guidance and assessment. Specifically, the proposed framework captures the motions of the laryngoscope and the manikin and renders a 3D see-through visualization to the head-mounted display (HMD). Furthermore, an attention-based Convolutional Neural Network (CNN) model automatically assesses ETI performance from the captured motions and identifies the motion regions that contribute most to the performance evaluation. Lastly, user-friendly augmented feedback is delivered as interpretable results aligned with the ETI scoring rubric, via a color-coded motion trajectory that highlights regions needing more practice. The classification accuracy of our machine learning model is 84.6%.
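The abstract above describes an attention-based CNN that scores the procedure from captured laryngoscope and manikin motion and highlights the motion segments that drive the score. The following is only a minimal, hypothetical sketch of such a model in PyTorch, not the authors' implementation; the layer sizes, the input format (D pose channels over T time steps), and all names are assumptions.

    # Hedged sketch: an attention-weighted 1-D CNN that classifies a motion
    # sequence (D pose channels over T time steps) and exposes per-time-step
    # attention weights usable for color-coded feedback. Layer sizes, channel
    # count, and class count are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MotionAttentionCNN(nn.Module):
        def __init__(self, in_channels: int = 12, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(               # temporal feature extractor
                nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU())
            self.attn = nn.Conv1d(64, 1, kernel_size=1)  # one attention logit per time step
            self.head = nn.Linear(64, num_classes)       # performance-class head

        def forward(self, x):                            # x: (batch, channels, time)
            h = self.features(x)                         # (batch, 64, time)
            w = torch.softmax(self.attn(h), dim=-1)      # attention over time steps
            pooled = (h * w).sum(dim=-1)                 # attention-weighted pooling
            return self.head(pooled), w.squeeze(1)       # logits + weights for highlighting

    logits, attention = MotionAttentionCNN()(torch.randn(1, 12, 200))

The attention weights returned alongside the logits are what would let a system of this kind color-code a trajectory by how strongly each segment influenced the assessment.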

{"title":"An Intelligent Augmented Reality Training Framework for Neonatal Endotracheal Intubation.","authors":"Shang Zhao, Xiao Xiao, Qiyue Wang, Xiaoke Zhang, Wei Li, Lamia Soghier, James Hahn","doi":"10.1109/ismar50242.2020.00097","DOIUrl":"10.1109/ismar50242.2020.00097","url":null,"abstract":"<p><p>Neonatal Endotracheal Intubation (ETI) is a critical resuscitation skill that requires tremendous practice of trainees before clinical exposure. However, current manikin-based training regimen is ineffective in providing satisfactory real-time procedural guidance for accurate assessment due to the lack of see-through visualization within the manikin. The training efficiency is further reduced by the limited availability of expert instructors, which inevitably results in a long learning curve for trainees. To this end, we propose an intelligent Augmented Reality (AR) training framework that provides trainees with a complete visualization of the ETI procedure for real-time guidance and assessment. Specifically, the proposed framework is capable of capturing the motions of the laryngoscope and the manikin and offer 3D see-through visualization rendered to the head-mounted display (HMD). Furthermore, an attention-based Convolutional Neural Network (CNN) model is developed to automatically assess the ETI performance from the captured motions as well as identify regions of motions that significantly contribute to the performance evaluation. Lastly, augmented user-friendly feedback is delivered with interpretable results with the ETI scoring rubric through the color-coded motion trajectory that classifies highlighted regions that need more practice. The classification accuracy of our machine learning model is 84.6%.</p>","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8084704/pdf/nihms-1692008.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38949397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AR4VI: AR as an Accessibility Tool for People with Visual Impairments.
James M Coughlan, Joshua Miele

Although AR technology has been largely dominated by visual media, a number of AR tools using both visual and auditory feedback have been developed specifically to assist people with low vision or blindness - an application domain that we term Augmented Reality for Visual Impairment (AR4VI). We describe two AR4VI tools developed at Smith-Kettlewell, as well as a number of pre-existing examples. We emphasize that AR4VI is a powerful tool with the potential to remove or significantly reduce a range of accessibility barriers. Rather than being restricted to use by people with visual impairments, AR4VI is a compelling universal design approach offering benefits for mainstream applications as well.

{"title":"AR4VI: AR as an Accessibility Tool for People with Visual Impairments.","authors":"James M Coughlan, Joshua Miele","doi":"10.1109/ISMAR-Adjunct.2017.89","DOIUrl":"10.1109/ISMAR-Adjunct.2017.89","url":null,"abstract":"<p><p>Although AR technology has been largely dominated by visual media, a number of AR tools using both visual and auditory feedback have been developed specifically to assist people with low vision or blindness - an application domain that we term Augmented Reality for Visual Impairment (AR4VI). We describe two AR4VI tools developed at Smith-Kettlewell, as well as a number of pre-existing examples. We emphasize that AR4VI is a powerful tool with the potential to remove or significantly reduce a range of accessibility barriers. Rather than being restricted to use by people with visual impairments, AR4VI is a compelling universal design approach offering benefits for mainstream applications as well.</p>","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5749423/pdf/nihms926801.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35710807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Tutorial 4: AR Implementations in Informal Learning
Eric Hawkinson, M. Stack, Jay Klaphake, S. Jacoby
This tutorial presents a variety of use cases of AR in informal learning environments, drawn from a range of different contexts, with examples of AR use in education, tourism, event organizing, and other settings. It is mainly geared toward people creating learning environments in any industry and aims to give them a foundation for starting to implement AR. The featured case is how AR was used at TEDxKyoto to engage participants. Several student projects that use AR will also be presented and available for demo.
{"title":"Tutorial 4: AImplementations in Informal Learning","authors":"Eric Hawkinson, M. Stack, Jay Klaphake, S. Jacoby","doi":"10.1109/ISMAR.2015.73","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.73","url":null,"abstract":"A variety of cases uses of AR in informal learning environments. The cases uses are drawn from a variety of different contexts. There will be examples of AR use in education, tourism, event organizing, and others. This is mainly geared to people creating learning environments in any industry a foundation to start implementation AR. The featured case use will be how AR was used at TEDxKyoto to engage participants. There will also be several student projects that use AR presented and available for demo.","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87519212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Tutorial 1: Global-scale Localization in Outdoor Environments for AR
Clemens Arth, D. Schmalstieg
In this tutorial we review existing technologies for outdoor localization in urban environments at a global scale and in full 6DOF, using primarily visual sensors. The goal is to give a clear overview of the current state of the art in global position and orientation estimation, covering a wide range of methods and algorithms from both the Computer Vision and the Augmented Reality communities. The main focus is on methods that are real-time capable, or that can at least be applied through a server-client infrastructure. Algorithms based on single images, panoramic images, SLAM maps, and sparse point-cloud reconstructions from SfM will be discussed, together with mobile hardware considerations. Attendees will gain an overview of the current landscape of technologies used for outdoor localization in AR and a feeling for the state of the art in methods for outdoor Augmented Reality.
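A core step shared by many of the localization methods the abstract surveys is recovering a full 6DOF camera pose from 2D-3D correspondences between query-image keypoints and a sparse SfM point cloud. The sketch below illustrates only that step, using OpenCV's RANSAC PnP solver; it is not taken from the tutorial material, and the correspondence arrays and camera intrinsics are placeholder assumptions.

    # Hedged sketch: 6DOF pose from 2D-3D matches against a sparse SfM point
    # cloud, using OpenCV's RANSAC PnP solver. The correspondences and
    # intrinsics below are random placeholders; a real pipeline would obtain
    # them by matching query-image descriptors against the (possibly
    # server-hosted) reconstruction.
    import numpy as np
    import cv2

    object_points = np.random.rand(100, 3).astype(np.float32)  # 3D map points (placeholder)
    image_points = np.random.rand(100, 2).astype(np.float32)   # matched 2D keypoints (placeholder)
    K = np.array([[800.0, 0.0, 320.0],                         # assumed pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, K, None,
        reprojectionError=3.0, iterationsCount=200)

    if ok:
        R, _ = cv2.Rodrigues(rvec)      # rotation matrix from axis-angle vector
        camera_center = -R.T @ tvec     # camera position in map/world coordinates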
{"title":"Tutorial 1: Global-scale Localization in Outdoor Environments for AR","authors":"Clemens Arth, D. Schmalstieg","doi":"10.1109/ISMAR.2015.72","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.72","url":null,"abstract":"In this tutorial we aim for a review of existing technologies to perform outdoor localization in urban environments at a global level in full 6DOF using visual sensors primarily. The goal is to provide a clear overview about the current state-of-the-art in global positioning and orientation estimation, which includes a wide range of methods and algorithms from both the Computer Vision and the Augmented Reality community.  The main focus is put on methods that are real-time capable, or can at least be applied through a server-client infrastructure. Algorithms that are based on single images, panoramic images, as well as SLAM maps and sparse point cloud reconstructions from SfM will be discussed, together with mobile hardware considerations.The attendees will acquire an overview about the current landscape of technologies employed to facilitate outdoor localization for AR. The tutorial should enable them to get a feeling for the current state-of-the-art of methods for outdoor Augmented Reality.","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74079720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Tutorial 2: Computational Imaging and Projection
S. Hiura, H. Nagahara, D. Iwai, Toshiyuki Amano
In this tutorial, we will introduce emerging technologies in computational imaging and light field projection to AR/MR researchers. Light is the most important medium in AR/VR technologies, not only for obtaining information but also for showing and modifying visual cues in real scenes. In this area, the latest techniques in optics, imaging, and lighting have therefore played an important role in taking the next step toward more sophisticated experiences. Computational photography is one of the most influential technologies in computer vision and optical engineering, and we believe most techniques in computational imaging and projection can be applied to common problems in mixed reality, such as scene modeling, modifying the appearance of real objects, and user interaction.
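One concrete example of the appearance modification the abstract mentions is radiometric compensation in a projector-camera system: the projected image is pre-corrected by an estimated per-pixel reflectance so a textured surface appears to take on a desired look. The sketch below is only a simplified illustration under an assumed linear model (captured ≈ albedo × projected + ambient); the albedo and ambient maps are placeholders, not the tutorial's method.

    # Hedged sketch: per-pixel radiometric compensation under a simplified
    # linear projector-camera model: captured = albedo * projected + ambient.
    # The albedo and ambient maps would normally be estimated by projecting
    # calibration patterns; here they are placeholder arrays.
    import numpy as np

    h, w = 480, 640
    albedo = np.clip(np.random.rand(h, w, 3), 0.2, 1.0)  # estimated surface reflectance (placeholder)
    ambient = np.full((h, w, 3), 0.05)                    # estimated ambient light term (placeholder)
    desired = np.random.rand(h, w, 3)                     # appearance the camera should observe

    # Invert the model and clamp to the projector's displayable range [0, 1].
    compensation = np.clip((desired - ambient) / albedo, 0.0, 1.0)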
{"title":"Tutorial 2: Computational Imaging and Projection","authors":"S. Hiura, H. Nagahara, D. Iwai, Toshiyuki Amano","doi":"10.1109/ISMAR.2015.71","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.71","url":null,"abstract":"In this tutorial, we will introduce emerging technologies on computational imaging and light field projection to AR/MR researchers.Light is the most important medium in AR/VR technologies to not only obtain information but also show and modify visual cue in the real scenes. Therefore in this area, latest techniques on optics, imaging and lighting have played an important role to make a next step toward the sophisticated experiences. Computational photography is one of the most influential technology in computer vision and optical engineering areas, and we think most techniques in computational imaging and projection can be applied to common problems in mixed reality, such as scene modeling, modification of the appearances of actual objects and user interactions.","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87828311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Tutorial 3: Intelligent User Interfaces
Daniel Sonntag
This tutorial introduces the design and implementation of Intelligent User Interfaces (IUIs). IUIs aim to incorporate intelligent automated capabilities into human-computer interaction, where the net impact is interaction that improves performance or usability in critical ways. This also involves designing and implementing an artificial intelligence (AI) component that effectively leverages human skills and capabilities, so that human performance with an application excels. IUIs embody capabilities that have traditionally been associated more strongly with humans than with computers: perceiving, interpreting, learning, using language, reasoning, planning, and deciding.
{"title":"Tutorial 3: Intelligent User Interfaces","authors":"Daniel Sonntag","doi":"10.1109/ISMAR.2015.74","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.74","url":null,"abstract":"IUI - Intelligent User Interfaces: will introduce you to the design and implementation of Intelligent User Interfaces (IUIs). IUIs aim to incorporate intelligent automated capabilities in human computer interaction, where the net impact is a human-computer interaction that improves performance or usability in critical ways. It also involves designing and implementing an artificial intelligence (AI) component that effectively leverages human skills and capabilities, so that human performance with an application excels. IUIs embody capabilities that have traditionally been associated more strongly with humans than with computers: how to perceive, interpret, learn, use language, reason, plan, and decide.","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81012322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Insight: Webized mobile AR and real-life use cases
S. Ahn, Joohyun Lee, Jinwook Kim, Sungkuk Chun, Jungbin Kim, Iltae Kim, Junsik Shim, Byounghyun Yoo, H. Ko
{"title":"Insight: Webized mobile AR and real-life use cases","authors":"S. Ahn, Joohyun Lee, Jinwook Kim, Sungkuk Chun, Jungbin Kim, Iltae Kim, Junsik Shim, Byounghyun Yoo, H. Ko","doi":"10.1109/ISMAR.2014.6948471","DOIUrl":"https://doi.org/10.1109/ISMAR.2014.6948471","url":null,"abstract":"","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85682968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1