
Latest publications from IEEE Virtual Reality 2004

A simplification architecture for exploring navigation tradeoffs in mobile VR
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.6
Carlos D. Correa, I. Marsic
Interactive applications on mobile devices often reduce data fidelity to adapt to resource constraints and variable user preferences. In virtual reality applications, the problem of reducing scene graph fidelity can be stated as a combinatorial optimization problem, where a part of the scene graph with maximum fidelity is chosen such that the resources it requires are below a given threshold and the hierarchical relationships are maintained. The problem can be formulated as a variation of the tree knapsack problem, which is known to be NP-hard. For this reason, solutions to this problem result in a tradeoff that affects user navigation. On one hand, exact solutions provide the highest fidelity but may take a long time to compute. On the other hand, greedy solutions are fast but lack high fidelity. We present a simplification architecture that allows the exploration of such navigation tradeoffs. This is achieved by formulating the problem in a generic way and developing software components that allow the dynamic selection of algorithms and constraints. The experimental results show that the architecture is flexible and supports dynamic reconfiguration.
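To make the tradeoff concrete, the following is a minimal, hypothetical sketch of the greedy end of the spectrum the abstract describes: selecting scene-graph nodes by fidelity-per-cost ratio under a resource budget, while admitting a node only after its parent so that hierarchical relationships are preserved. The `SceneNode` fields and function names are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch (not the paper's implementation): greedy selection of
# scene-graph nodes under a resource budget, admitting a node only after its
# parent so hierarchical relationships are preserved.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneNode:
    name: str
    fidelity: float                       # benefit of including this node
    cost: float                           # resource cost of including it
    children: List["SceneNode"] = field(default_factory=list)

def greedy_select(root: SceneNode, budget: float) -> List[SceneNode]:
    """Pick nodes by fidelity/cost ratio until the budget is exhausted."""
    if root.cost > budget:
        return []
    chosen = [root]
    remaining = budget - root.cost
    frontier = list(root.children)        # nodes whose parent is already chosen
    while frontier:
        frontier.sort(key=lambda n: n.fidelity / n.cost, reverse=True)
        node = next((n for n in frontier if n.cost <= remaining), None)
        if node is None:
            break                         # nothing left that fits the budget
        frontier.remove(node)
        chosen.append(node)
        remaining -= node.cost
        frontier.extend(node.children)    # its children become selectable
    return chosen

# Example: a tiny scene graph with a 10-unit budget.
leaves = [SceneNode("detail_a", 5.0, 4.0), SceneNode("detail_b", 2.0, 6.0)]
scene = SceneNode("root", 1.0, 2.0,
                  children=[SceneNode("building", 4.0, 3.0, children=leaves)])
print([n.name for n in greedy_select(scene, 10.0)])  # ['root', 'building', 'detail_a']
```

An exact solver for the same formulation would instead search over all admissible subtrees (for example by dynamic programming over the tree), which is where the latency-versus-fidelity tradeoff discussed above comes from.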
Citations: 3
MVL toolkit: software library for constructing an immersive shared virtual world
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.54
T. Ogi, T. Kayahara, T. Yamada, M. Hirose
In this study, we investigated various functions that are required in an immersive shared virtual world, and then developed the MVL toolkit to implement these functions. The MVL toolkit contains several utilities that enable such functions as sharing space, sharing users, sharing operations, sharing information and sharing time. By using the MVL toolkit, collaborative virtual reality applications can be easily constructed by extending existing stand-alone application programs.
Citations: 0
Real world video avatar: transmission and presentation of human figure
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.64
Hiroyuki Maeda, T. Tanikawa, J. Yamashita, K. Hirota, M. Hirose
Video avatar (Ogi et al., 2001) is one methodology of interaction with people at a remote location. By using such video-based real-time human figures, participants can interact using nonverbal information such as gestures and eye contact. In traditional video avatar interaction, however, participants can interact only in "virtual" space. We have proposed the concept of a "real-world video avatar", that is, the concept of video avatar presentation in "real" space. One requirement of such a system is that the presented figure must be viewable from various directions, similarly to a real human. In this paper, such a view is called "multiview". By presenting a real-time human figure with "multiview", many participants can interact with the figure from all directions, similarly to interaction in the real world. A system that supports "multiview" was proposed by Endo et al. (2000); however, that system cannot show real-time images. We have developed a display system which supports "multiview" (Maeda et al., 2002). In this paper, we discuss the evaluation of real-time presentation using the display system.
Citations: 9
Tracker calibration using tetrahedral mesh and tricubic spline models of warp
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.79
C. Borst
This paper presents a three-level tracker calibration system that greatly reduces errors in tracked position and orientation. The first level computes an error-minimizing rigid body transform that eliminates the need for precise alignment of a tracker base frame. The second corrects for field warp by interpolating correction values stored with vertices in a tetrahedrization of warped space. The third performs an alternative field warp calibration by interpolating corrections in the parameter space of a tricubic spline model of field warp. The system is evaluated for field warp calibration near a passive-haptic panel in both low-warp and high-warp environments. The spline method produces the most accurate results, reducing median position error by over 90% and median orientation error by over 80% when compared to the use of only a rigid body transform.
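As an illustration of the first calibration level, below is a minimal sketch of the standard least-squares (SVD-based) rigid-body fit between raw tracker readings and surveyed reference points. The function name and array layout are assumptions made for illustration, and the paper's second and third levels (tetrahedral-mesh and tricubic-spline field-warp correction) are not shown.

```python
# Hedged sketch of an error-minimizing rigid-body calibration (level one above):
# the classic SVD-based least-squares fit of a rotation R and translation t that
# maps raw tracker positions onto surveyed reference positions.
import numpy as np

def fit_rigid_transform(tracked: np.ndarray, reference: np.ndarray):
    """tracked, reference: (N, 3) arrays of corresponding 3D points."""
    ct, cr = tracked.mean(axis=0), reference.mean(axis=0)
    H = (tracked - ct).T @ (reference - cr)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ ct
    return R, t

# A corrected reading for a new raw tracker position p is then R @ p + t;
# residual errors after this step are what the field-warp levels interpolate away.
```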
Citations: 11
Navigation with place representations and visible landmarks
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.55
Jeffrey S. Pierce, R. Pausch
Existing navigation techniques do not scale well to large virtual worlds. We present a new technique, navigation with place representations and visible landmarks, that scales from town-sized to planet-sized worlds. Visible landmarks make distant landmarks visible and allow users to travel relative to those landmarks with a single gesture. Actual and symbolic place representations allow users to detect and travel to more distant locations with a small number of gestures. The world's semantic place hierarchy determines which visible landmarks and place representations users can see at any point in time. We present experimental results demonstrating that our technique allows users to navigate more efficiently than a modified panning and zooming WIM, completing within-place navigation tasks 22% faster and between-place tasks 38% faster on average.
Citations: 59
Projector-based dual-resolution stereoscopic display
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.63
G. Godin, Jean-François Lalonde, L. Borgeat
We present a stereoscopic display system which incorporates a high-resolution inset image, or fovea. We describe the specific problem of false depth cues along the boundaries of the inset image, and propose a solution in which the boundaries of the inset image are dynamically adapted as a function of the geometry of the scene. This method produces comfortable stereoscopic viewing at a low additional computational cost. The four projectors need only be approximately aligned: a single drawing pass is required, regardless of projector alignment, since the warping is applied as part of the 3D rendering process.
Citations: 7
Unified gesture-based interaction techniques for object manipulation and navigation in a large-scale virtual environment
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.81
Yusuke Tomozoe, Takashi Machida, K. Kiyokawa, H. Takemura
Manipulation of virtual objects and navigation are common operations in a large-scale virtual environment. In this paper, we propose a few gesture-based interaction techniques that can be used for both object manipulation and navigation. Unlike existing methods, our techniques enable a user, with a little practice, to perform these two types of operations flexibly through identical interaction manners, by introducing a movability property attached to every virtual object.
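The movability-property idea lends itself to a very small dispatch rule. The following hypothetical sketch (the names, fields, and update rule are assumptions for illustration, not the paper's code) shows how a single grab-and-drag gesture could drive both operations.

```python
# Hypothetical sketch of gesture dispatch via a per-object movability property:
# the same grab-and-drag gesture either moves the grabbed object (manipulation)
# or, if the object is not movable, moves the viewpoint relative to it (navigation).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VirtualObject:
    name: str
    position: Tuple[float, float, float]
    movable: bool                     # the movability property attached to every object

def apply_grab_drag(obj: VirtualObject,
                    drag: Tuple[float, float, float],
                    viewpoint: Tuple[float, float, float]):
    dx, dy, dz = drag
    if obj.movable:
        # Manipulation: the grabbed object follows the hand.
        obj.position = (obj.position[0] + dx, obj.position[1] + dy, obj.position[2] + dz)
    else:
        # Navigation: dragging a fixed object pulls the viewpoint the opposite way.
        viewpoint = (viewpoint[0] - dx, viewpoint[1] - dy, viewpoint[2] - dz)
    return obj, viewpoint
```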
Citations: 9
Focus measurement on programmable graphics hardware for all in-focus rendering from light fields
Pub Date : 2003-09-29 DOI: 10.1109/VR.2004.39
Kaoru Sugita, Keita Takahashi, T. Naemura, H. Harashima
This paper deals with a method for interactive rendering of photorealistic images, which is a fundamental technology in the field of virtual reality. Since the latest graphics processing units (GPUs) are programmable, they are expected to be useful for various applications including numerical computation and image processing. This paper proposes a method for focus measurement on light field rendering using a GPU as a fast processing unit for image processing and image-based rendering. It is confirmed that the proposed method enables interactive all in-focus rendering from light fields. This is because the latest DirectX 9 generation GPUs are much faster than CPUs in solving optimization problems, and a GPU implementation can eliminate the latency for data transmission between video memory and system memory. Experimental results show that the GPU implementation outperforms its CPU implementation.
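To illustrate the general idea behind focus measurement over a light field, here is a hedged CPU/NumPy sketch of a plane-sweep-style focus measure: for each candidate focal depth the views are shifted by their camera offsets and compared, and each pixel takes its value from the depth at which the views agree best. The function name, offset convention, and variance-based measure are assumptions for illustration; they are not the paper's GPU fragment programs.

```python
# Hedged NumPy sketch (not the paper's GPU shaders) of a per-pixel focus
# measure over a light field: for each candidate focal depth, the views are
# shifted according to their camera offsets and averaged; the depth giving the
# lowest color variance across views is taken as "in focus" at that pixel,
# and the all-in-focus image is assembled from the corresponding averages.
import numpy as np

def all_in_focus(views, offsets, depths):
    """views: list of (H, W) grayscale images; offsets: (u, v) pixel offset per
    view at unit depth; depths: candidate focal depths (disparity scale factors)."""
    h, w = views[0].shape
    best_var = np.full((h, w), np.inf)
    result = np.zeros((h, w))
    for d in depths:
        shifted = []
        for img, (u, v) in zip(views, offsets):
            # Shift each view by its offset scaled by the candidate depth.
            # np.roll wraps at the borders, which a real renderer would handle.
            shifted.append(np.roll(np.roll(img, int(round(d * v)), axis=0),
                                   int(round(d * u)), axis=1))
        stack = np.stack(shifted)            # (n_views, H, W)
        var = stack.var(axis=0)              # low variance = views agree = in focus
        mean = stack.mean(axis=0)
        mask = var < best_var
        best_var[mask] = var[mask]
        result[mask] = mean[mask]
    return result
```

This per-pixel shift-and-compare loop is the kind of optimization the abstract reports mapping onto programmable graphics hardware, where keeping the light-field data in video memory avoids the transfer latency mentioned above.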
Citations: 5