IEEE Virtual Reality 2004: Latest Publications

A simplification architecture for exploring navigation tradeoffs in mobile VR
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.6
Carlos D. Correa, I. Marsic
Interactive applications on mobile devices often reduce data fidelity to adapt to resource constraints and variable user preferences. In virtual reality applications, the problem of reducing scene graph fidelity can be stated as a combinatorial optimization problem, where a part of the scene graph with maximum fidelity is chosen such that the resources it requires are below a given threshold and the hierarchical relationships are maintained. The problem can be formulated as a variation of the tree knapsack problem, which is known to be NP-hard. For this reason, solutions to this problem result in a tradeoff that affects user navigation. On one hand, exact solutions provide the highest fidelity but may take a long time to compute. On the other hand, greedy solutions are fast but lack high fidelity. We present a simplification architecture that allows the exploration of such navigation tradeoffs. This is achieved by formulating the problem in a generic way and developing software components that allow the dynamic selection of algorithms and constraints. The experimental results show that the architecture is flexible and supports dynamic reconfiguration.
Citations: 3
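The exact-versus-greedy tradeoff described in the abstract can be illustrated with a toy greedy heuristic for the tree knapsack problem: nodes are taken in order of fidelity-per-cost, but only if their parent is already selected, which preserves the scene-graph hierarchy. This is an illustrative sketch, not the paper's architecture; the `Node` structure and the example costs and fidelities are hypothetical.

```python
# Greedy tree-knapsack sketch: a node may be selected only if its parent is
# selected (hierarchy constraint) and its cost fits the remaining budget.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cost: int       # resource usage of this node
    fidelity: int   # fidelity contribution of this node
    children: list = field(default_factory=list)

def greedy_tree_knapsack(root, budget):
    selected = set()
    remaining = budget
    # Frontier: nodes whose parent is already selected (the root is always eligible).
    frontier = [root]
    while frontier:
        # Take the frontier node with the best fidelity/cost ratio that still fits.
        frontier.sort(key=lambda n: n.fidelity / n.cost, reverse=True)
        for i, n in enumerate(frontier):
            if n.cost <= remaining:
                frontier.pop(i)
                selected.add(n.name)
                remaining -= n.cost
                frontier.extend(n.children)  # children become eligible
                break
        else:
            break  # no frontier node fits the remaining budget
    return selected

# Example: a tiny scene graph with a resource budget of 5.
root = Node("world", 1, 1, [Node("building", 4, 10), Node("tree", 2, 3)])
print(sorted(greedy_tree_knapsack(root, 5)))  # ['building', 'world']
```

An exact solver would enumerate hierarchy-respecting subtrees instead, which is what makes the fidelity/latency tradeoff the paper explores.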
An on-line evaluation system for optical see-through augmented reality
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.11
Nassir Navab, Siavash Zokai, Yakup Genç, E. M. Coelho
This work introduces a technique that allows end users to evaluate and recalibrate their AR system as frequently as needed. We developed an interactive game as a prototype of such an evaluation system and explain how this technique can be implemented for use in real life.
Citations: 21
MVL toolkit: software library for constructing an immersive shared virtual world
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.54
T. Ogi, T. Kayahara, T. Yamada, M. Hirose
In this study, we investigated various functions that are required in an immersive shared virtual world, and then developed the MVL toolkit to implement these functions. The MVL toolkit contains several utilities that enable such functions as sharing space, sharing users, sharing operations, sharing information and sharing time. By using the MVL toolkit, collaborative virtual reality applications can be easily constructed by extending existing stand-alone application programs.
Citations: 0
PRASAD: an augmented reality based non-invasive pre-operative visualization framework for lungs
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.58
A. Santhanam, C. Fidopiastis, B. Hoffman-Ruddy, J. Rolland
This paper presents a preoperative anatomical visualization framework, PRASAD (physically realistic adaptive and scalable anatomical deformation system), which combines a bio-mathematical representation of deformable lungs with real-time deformation and stereoscopic visualization technology. This framework provides a visualization of a dynamic, patient-specific deformation of synthetic 3D anatomical models, which physicians can view from different viewpoints in a stereoscopic augmented reality environment for efficient diagnosis.
Citations: 4
Real world video avatar: transmission and presentation of human figure
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.64
Hiroyuki Maeda, T. Tanikawa, J. Yamashita, K. Hirota, M. Hirose
Video avatar (Ogi et al., 2001) is a methodology for interacting with people at a remote location. By using such video-based real-time human figures, participants can interact using nonverbal information such as gestures and eye contact. In traditional video avatar interaction, however, participants can interact only in "virtual" space. We have proposed the concept of a "real-world video avatar", that is, the concept of video avatar presentation in "real" space. One requirement of such a system is that the presented figure must be viewable from various directions, similarly to a real human. In this paper such a view is called "multiview". By presenting a real-time human figure with "multiview", many participants can interact with the figure from all directions, similarly to interaction in the real world. A system that supports "multiview" was proposed by Endo et al. (2000); however, that system cannot show real-time images. We have developed a display system which supports "multiview" (Maeda et al., 2002). In this paper, we discuss the evaluation of real-time presentation using the display system.
Citations: 9
Projector-based dual-resolution stereoscopic display
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.63
G. Godin, Jean-François Lalonde, L. Borgeat
We present a stereoscopic display system which incorporates a high-resolution inset image, or fovea. We describe the specific problem of false depth cues along the boundaries of the inset image, and propose a solution in which the boundaries of the inset image are dynamically adapted as a function of the geometry of the scene. This method produces comfortable stereoscopic viewing at a low additional computational cost. The four projectors need only be approximately aligned: a single drawing pass is required, regardless of projector alignment, since the warping is applied as part of the 3D rendering process.
Citations: 7
Unified gesture-based interaction techniques for object manipulation and navigation in a large-scale virtual environment
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.81
Yusuke Tomozoe, Takashi Machida, K. Kiyokawa, H. Takemura
Manipulation of virtual objects and navigation are common operations in a large-scale virtual environment. In this paper, we propose several gesture-based interaction techniques that can be used for both object manipulation and navigation. Unlike existing methods, our techniques enable a user to perform these two types of operations flexibly, with little practice, in identical interaction manners, by introducing a movability property attached to every virtual object.
Citations: 9
Focus measurement on programmable graphics hardware for all in-focus rendering from light fields
Pub Date : 2003-09-29 DOI: 10.1109/VR.2004.39
Kaoru Sugita, Keita Takahashi, T. Naemura, H. Harashima
This paper deals with a method for interactive rendering of photorealistic images, which is a fundamental technology in the field of virtual reality. Since the latest graphics processing units (GPUs) are programmable, they are expected to be useful for various applications including numerical computation and image processing. This paper proposes a method for focus measurement on light field rendering using a GPU as a fast processing unit for image processing and image-based rendering. It is confirmed that the proposed method enables interactive all in-focus rendering from light fields. This is because the latest DirectX 9 generation GPUs are much faster than CPUs in solving optimization problems, and a GPU implementation can eliminate the latency for data transmission between video memory and system memory. Experimental results show that the GPU implementation outperforms its CPU implementation.
Citations: 5
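The focus-measurement idea in the abstract, scoring the sharpness of each refocused light-field slice and compositing the sharpest slice per pixel, can be sketched on the CPU with a simple local-variance sharpness measure. The paper performs this on the GPU; the NumPy version below is an illustrative assumption, not the authors' implementation, and `local_variance` is a generic focus measure rather than the specific one the paper evaluates.

```python
# All-in-focus compositing sketch: per pixel, pick the refocused slice with the
# highest local variance (a simple sharpness/focus measure).
import numpy as np

def local_variance(img, radius=1):
    """Variance in a (2*radius+1)^2 window around each pixel."""
    k = 2 * radius + 1
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    # Each output pixel sees its k-by-k neighborhood; compute the variance there.
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return win.var(axis=(-1, -2))

def all_in_focus(stack):
    """stack: list of HxW arrays of the same scene refocused at different depths."""
    sharpness = np.stack([local_variance(s) for s in stack])
    best = sharpness.argmax(axis=0)  # per-pixel index of the sharpest slice
    stack = np.stack(stack)
    h, w = best.shape
    # Gather the winning slice's value at every pixel.
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```

The GPU version in the paper maps naturally onto this structure: the per-pixel variance and argmax are independent per fragment, which is why a shader implementation avoids the CPU round-trip the abstract mentions.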