Latest publications from the 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
An Empirical Model for Specularity Prediction with Application to Dynamic Retexturing
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.13
Alexandre Morgand, M. Tamaazousti, A. Bartoli
Specularities, which are often visible in images, may be problematic in computer vision since they depend on parameters which are difficult to estimate in practice. We present an empirical model called JOLIMAS (JOint LIght-MAterial Specularity) that allows specularity prediction. JOLIMAS is reconstructed from images of specular reflections observed on a planar surface and implicitly includes the light and material properties which are intrinsic to specularities. This work was motivated by the observation that specularities have a conic shape on planar surfaces. A theoretical study of the well-known illumination models of Phong and Blinn-Phong was conducted to support the accuracy of this hypothesis. A conic shape is obtained by projecting a quadric onto a planar surface. We showed empirically the existence of a fixed quadric whose perspective projection fits the conic-shaped specularity in the associated image. JOLIMAS predicts the complex phenomenon of specularity using a simple geometric approach with static parameters for the object material and the light source shape. It is suited to indoor light sources such as light bulbs and fluorescent lamps. The performance of the prediction was convincing on synthetic and real sequences. Additionally, we used the specularity prediction for dynamic retexturing and obtained convincing rendering results. Further results are presented as supplementary material.
Citations: 8
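The abstract's central geometric fact, that the perspective projection of a fixed quadric yields a conic in the image, can be checked numerically. The following is an illustrative NumPy sketch, not the authors' implementation; the unit sphere, canonical camera, and all variable names are assumptions chosen for the example:

```python
import numpy as np

def project_dual_quadric(P, Q_dual):
    """Project a dual quadric (4x4 symmetric) to a dual conic (3x3):
    C* = P Q* P^T, a standard result in projective geometry."""
    return P @ Q_dual @ P.T

# Example: a unit sphere centred 5 units in front of a canonical camera.
# Dual quadric of a sphere (radius r, centre c):
#   Q* = T diag(-r^2, -r^2, -r^2, 1) T^T, with T = [[I, c], [0, 1]]
r, c = 1.0, np.array([0.0, 0.0, 5.0])
T = np.eye(4); T[:3, 3] = c
Q_dual = T @ np.diag([-r**2, -r**2, -r**2, 1.0]) @ T.T

P = np.hstack([np.eye(3), np.zeros((3, 1))])   # K = I, [R|t] = [I|0]
C_dual = project_dual_quadric(P, Q_dual)

# The (primal) conic is the inverse of the dual conic (up to scale).
C = np.linalg.inv(C_dual)
C /= C[0, 0]                  # normalise so the conic reads x^2 + y^2 - rho^2 = 0
rho = np.sqrt(-C[2, 2])       # image radius of the sphere's outline
```

As a sanity check, the outline of a sphere of radius r at depth d has image radius r / sqrt(d^2 - r^2) on the plane z = 1, which the sketch reproduces for r = 1, d = 5.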
Instant Mixed Reality Lighting from Casual Scanning
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.18
Thomas Richter-Trummer, Denis Kalkofen, Jinwoo Park, D. Schmalstieg
We present a method for recovering both incident lighting and surface materials from casually scanned geometry. By casual, we mean a rapid and potentially noisy scanning procedure of unmodified and uninstrumented scenes with a commodity RGB-D sensor. In other words, unlike reconstruction procedures which require careful preparation in a laboratory environment, our method works with input that can be obtained by ordinary consumers. To ensure a robust procedure, we segment the reconstructed geometry into surfaces with homogeneous material properties and compute the radiance transfer on these segments. With this input, we solve the inverse rendering problem of factorization into lighting and material properties using an iterative optimization in spherical harmonics form. This allows us to account for self-shadowing and recover specular properties. The resulting data can be used to generate a wide range of mixed reality applications, including rendering synthetic objects with matching lighting into a given scene, but also re-rendering the scene (or a part of it) with new lighting. We show the robustness of our approach with real and synthetic examples under a variety of lighting conditions and compare them with ground truth data.
Citations: 49
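The factorization into lighting and material described above is, at its core, a linear problem in the spherical harmonics (SH) basis. Below is a minimal sketch of that idea on synthetic data, assuming order-2 SH (9 coefficients) and a single homogeneous segment; it is not the paper's optimizer, and the inherent albedo/lighting scale ambiguity means only their product is recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 surface samples, order-2 SH (9 coefficients).
# transfer[i] holds the precomputed radiance transfer of sample i
# (cosine lobe plus self-shadowing, already projected into SH).
n, k = 200, 9
transfer = rng.normal(size=(n, k))
light_true = rng.normal(size=k)        # unknown incident lighting (SH)
albedo_true = 0.6                      # one homogeneous segment

observed = albedo_true * transfer @ light_true

# Alternating least squares: fix albedo, solve lighting; then update albedo.
albedo = 1.0
for _ in range(20):
    light, *_ = np.linalg.lstsq(albedo * transfer, observed, rcond=None)
    shading = transfer @ light
    albedo = (shading @ observed) / (shading @ shading)
```

The recovered `albedo * light` matches `albedo_true * light_true`; resolving the absolute scale requires an external constraint (e.g. assuming a maximum albedo).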
A Single Camera Image Based Approach for Glossy Reflections in Mixed Reality Applications
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.12
Tobias Schwandt, W. Broll
Proper scene inference provides the basis for a seamless integration of virtual objects into the real environment. While widely neglected in many AR/MR environments, previous approaches providing good results were based on rather complex setups, often involving mirrored balls, several HDR cameras, and fisheye lenses to achieve proper light probes. In this paper we present an approach requiring only a single RGB-D camera image for generating glossy reflections on virtual objects. Our approach is based on a partial 3D reconstruction of the real environment combined with a screen-space ray-tracing mechanism. We show that our approach allows for convincing reflections of the real environment as well as mutual reflections between virtual objects of an MR environment.
Citations: 14
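Screen-space ray tracing, the mechanism the abstract pairs with partial 3D reconstruction, amounts to marching a reflected ray across the depth buffer until a stored depth occludes it. The following is a deliberately tiny sketch with a hypothetical single-channel depth buffer and fixed step size; real implementations use hierarchical or DDA stepping and perspective-correct depth:

```python
import numpy as np

def screen_space_march(depth, x0, y0, dx, dy, dz, z0, steps=64, eps=1e-3):
    """March a reflected ray across a depth buffer (one value per pixel);
    report the first pixel whose stored depth lies in front of the ray."""
    h, w = depth.shape
    x, y, z = float(x0), float(y0), float(z0)
    for _ in range(steps):
        x, y, z = x + dx, y + dy, z + dz
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < w and 0 <= yi < h):
            return None                      # ray left the screen: no hit
        if depth[yi, xi] <= z + eps:         # scene surface occludes the ray
            return (xi, yi)
    return None

# Toy scene: a flat far plane at depth 10 with a nearer "wall" at x = 5.
depth = np.full((8, 8), 10.0)
depth[:, 5] = 2.0
hit = screen_space_march(depth, x0=0, y0=3, dx=1, dy=0, dz=0.5, z0=0.0)
```

Here the ray starting at pixel (0, 3) reaches depth 2.5 by the time it crosses column 5, where the wall's stored depth 2.0 occludes it, so the march reports a hit there.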
Leveraging the User's Face for Absolute Scale Estimation in Handheld Monocular SLAM
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.20
S. Knorr, Daniel Kurz
We present an approach to estimate absolute scale in handheld monocular SLAM by simultaneously tracking the user's face with a user-facing camera while a world-facing camera captures the scene for localization and mapping. Given face tracking at absolute scale, two images of a face taken from two different viewpoints enable estimating the translational distance between the two viewpoints in absolute units, such as millimeters. Under the assumption that the face itself stayed stationary in the scene while taking the two images, the motion of the user-facing camera relative to the face can be transferred to the motion of the rigidly connected world-facing camera relative to the scene. This also allows determining the latter motion in absolute units and enables reconstructing and tracking the scene at absolute scale. As faces of different adult humans differ only moderately in terms of size, it is possible to rely on statistics for guessing the absolute dimensions of a face. For improved accuracy, the dimensions of the particular face of the user can be calibrated. Based on sequences of world-facing and user-facing images captured by a mobile phone, we show for different scenes how our approach enables reconstruction and tracking at absolute scale using a proof-of-concept implementation. Quantitative evaluations against ground truth data confirm that our approach provides absolute scale at an accuracy well suited for different applications. Particularly, we show how our method enables various use cases in handheld Augmented Reality applications that superimpose virtual objects at absolute scale or feature interactive distance measurements.
Citations: 8
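The scale-transfer step described above reduces to a ratio of baselines: the user-facing camera measures the translation in millimeters (the face is stationary, with known or statistically guessed dimensions), while SLAM measures the same rigid motion in arbitrary map units. A minimal sketch with made-up numbers, not taken from the paper:

```python
import numpy as np

def absolute_scale(t_face_mm, t_slam_units):
    """Ratio of the metric camera translation (from face tracking, in mm)
    to the same motion expressed in arbitrary SLAM map units."""
    return np.linalg.norm(t_face_mm) / np.linalg.norm(t_slam_units)

# The user-facing camera saw the (stationary) face translate by 80 mm;
# the rigidly attached world-facing camera moved 0.04 map units.
s = absolute_scale(np.array([80.0, 0.0, 0.0]), np.array([0.04, 0.0, 0.0]))
points_metric = s * np.array([[0.01, 0.02, 0.05]])   # map points, now in mm
```

With the scale factor applied to the map and trajectory, distances in the scene can be reported directly in millimeters.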
TactileVR: Integrating Physical Toys into Learn and Play Virtual Reality Experiences
Pub Date : 2016-05-07 DOI: 10.1109/ISMAR.2016.25
Lior Shapira, J. Amores, X. Benavides
We present TactileVR, a proof-of-concept virtual reality system in which a user is free to move around and interact with physical objects and toys, which are represented in the virtual world. By integrating tracking information from the head, hands and feet of the user, as well as the objects, we infer complex gestures and interactions such as shaking a toy, rotating a steering wheel, or clapping your hands. We create educational and recreational experiences for kids, which promote exploration and discovery, while feeling intuitive and safe. In each experience objects have a unique appearance and behavior; e.g., in an electric-circuits lab, toy blocks serve as switches, batteries, and light bulbs. We conducted a user study with children ages 5-11, who experienced TactileVR and interacted with virtual proxies of physical objects. Children took instantly to the TactileVR environment, intuitively discovered a variety of interactions, and completed tasks faster than with non-tactile virtual objects. Moreover, the presence of physical toys created the opportunity for collaborative play, even when only some of the kids were using a VR headset.
Citations: 24
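Gesture inference of the kind the abstract mentions (e.g. shaking a toy) can be built from tracked positions alone. Below is a hedged sketch of one plausible shake detector that counts velocity reversals along the dominant motion axis; the thresholds and the 60 Hz sampling are assumptions for the example, not values from the paper:

```python
import numpy as np

def detect_shake(positions, dt=1/60, min_reversals=4, min_speed=0.3):
    """Flag a 'shake' when the tracked object's velocity along its dominant
    axis reverses direction several times at sufficient speed (m/s)."""
    v = np.diff(positions, axis=0) / dt          # finite-difference velocity
    axis = np.argmax(np.abs(v).sum(axis=0))      # dominant motion axis
    va = v[:, axis]
    active = np.abs(va) > min_speed              # ignore near-stationary samples
    signs = np.sign(va[active])
    reversals = np.count_nonzero(np.diff(signs) != 0)
    return reversals >= min_reversals

# Synthetic 1-second trajectory of a toy oscillating left-right at 4 Hz.
t = np.linspace(0, 1, 61)
traj = np.stack([0.1 * np.sin(2 * np.pi * 4 * t), 0 * t, 0 * t], axis=1)
is_shake = detect_shake(traj)
```

A steering-wheel rotation or a clap would use the same tracked streams with different features (angular velocity, inter-hand distance), which is the sense in which the system composes gestures from head, hand, foot, and object tracking.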
Robust Keyframe-based Monocular SLAM for Augmented Reality
DOI: 10.1109/ISMAR-Adjunct.2016.0111
Haomin Liu, Guofeng Zhang, H. Bao
Keyframe-based SLAM has achieved great success in terms of accuracy, efficiency and scalability. However, due to the parallax requirement and the delay of map expansion, traditional keyframe-based methods easily encounter robustness problems in challenging cases, especially fast motion with strong rotation. In practical AR applications these challenging cases are easily encountered, since a home user may not carefully move the camera to avoid potential problems. With the above motivation, in this paper we present RKSLAM, a robust keyframe-based monocular SLAM system that can reliably handle fast motion and strong rotation, ensuring good AR experiences. First, we propose a novel multi-homography based feature tracking method which is robust and efficient under fast motion and strong rotation. Building on it, we propose a real-time local map expansion scheme to triangulate the observed 3D points immediately, without delay. A sliding-window based camera pose optimization framework is proposed, which imposes motion prior constraints between consecutive frames through simulated or real IMU data. Qualitative and quantitative comparisons with the state-of-the-art methods, and an AR application on mobile devices, demonstrate the effectiveness of the proposed approach.
Citations: 21
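The multi-homography tracking idea is motivated by the fact that under fast, rotation-dominant motion a homography still predicts feature positions well even when parallax-based triangulation fails. The following sketch illustrates that prediction step for the pure-rotation case, with an assumed intrinsic matrix K; it is an illustration of the principle, not RKSLAM's tracker:

```python
import numpy as np

def warp_points(H, pts):
    """Predict feature positions in the next frame via a homography
    (homogeneous transform followed by the perspective divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]

# Pure-rotation homography H = K R K^{-1}: the case where fast rotation
# defeats parallax-based tracking but a homography still predicts well.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(5)                      # 5-degree yaw between frames
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
H = K @ R @ np.linalg.inv(K)
predicted = warp_points(H, np.array([[320.0, 240.0]]))
```

A feature at the principal point is predicted to move by f * tan(theta) pixels horizontally, giving the tracker a strong initial guess for matching under rapid rotation.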