Proceedings of the 2nd ACM symposium on Spatial user interaction: latest publications

Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2659775
A. Pérez, Benoît Bossavit, M. Hachet
Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. Interacting with these environments requires a manipulation technique, whose role is to define how input data modify the properties of virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device itself can be used as input, creating both an opportunity and a design challenge. In this paper we compare three manipulation techniques that employ, respectively, multi-touch input, device movement, and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only device movement and orientation is more intuitive, and performs worse only for large rotations.
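As a rough illustration of how such inputs can be combined, the sketch below (Python with NumPy) fuses a one-finger drag with the tracked device pose: the drag translates the object parallel to the image plane, while the device's frame-to-frame rotation is transferred directly to the object. The function names, the gain constant, and the mapping itself are illustrative assumptions, not the exact techniques compared in the paper.

```python
# Minimal sketch of fusing touch input with device pose for 3D manipulation
# in mobile AR. Names and the mapping are assumptions for illustration.
import numpy as np

def manipulate(obj_pos, obj_rot, touch_delta, device_pose_prev, device_pose_now):
    """Update a virtual object from one frame of combined input.

    obj_pos: (3,) object position in world space
    obj_rot: (3, 3) object orientation as a rotation matrix
    touch_delta: (2,) one-finger drag in screen pixels, used for translation
    device_pose_*: (4, 4) camera-to-world matrices from the AR tracker
    """
    # Map the screen-space drag onto the camera's right/up axes so a drag
    # moves the object parallel to the image plane.
    cam_right = device_pose_now[:3, 0]
    cam_up = device_pose_now[:3, 1]
    gain = 0.001  # metres per pixel; tuning constant, assumed
    obj_pos = obj_pos + gain * (touch_delta[0] * cam_right - touch_delta[1] * cam_up)

    # Transfer the device's frame-to-frame rotation onto the object, so
    # physically turning the handheld turns the virtual object with it.
    rel_rot = device_pose_now[:3, :3] @ device_pose_prev[:3, :3].T
    obj_rot = rel_rot @ obj_rot
    return obj_pos, obj_rot
```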
{"title":"Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments","authors":"A. Pérez, Benoît Bossavit, M. Hachet","doi":"10.1145/2659766.2659775","DOIUrl":"https://doi.org/10.1145/2659766.2659775","url":null,"abstract":"Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. To interact with these environments it is necessary to use a manipulation technique. The objective of a manipulation technique is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input creating both an opportunity and a design challenge. In this paper we compared three manipulation techniques which namely employ multi-touch, device position and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only the device movement and orientation is more intuitive and performs worse only in large rotations.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128189013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 52
Session details: Spatial pointing and touching
Pub Date: 2014-10-04 DOI: 10.1145/3247435
M. Hachet
{"title":"Session details: Spatial pointing and touching","authors":"M. Hachet","doi":"10.1145/3247435","DOIUrl":"https://doi.org/10.1145/3247435","url":null,"abstract":"","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133596984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The coming age of computer graphics and the evolution of language
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661116
K. Perlin
Sometime in the coming years -- whether through ubiquitous projection, AR glasses, smart contact lenses, retinal implants or some technology as yet unknown -- we will live in an eccescopic world, where everything we see around us will be augmented by computer graphics, including our own appearance. In a sense, we are just now starting to enter the Age of Computer Graphics. As children are born into this brave new world, what will their experience be? Face-to-face communication, both in person and over great distances, will become visually enhanced, and any tangible object can become an interface to digital information [1]. Hand gestures will be able to produce visual artifacts. After these things come to pass, how will future generations of children evolve natural language itself [2]? How might they think and speak differently about the world around them? What will life in such a world be like for those who are native born to it? We will present some possibilities, and some suggestions for empirical ways to explore those possibilities now -- without needing to wait for those smart contact lenses.
{"title":"The coming age of computer graphics and the evolution of language","authors":"K. Perlin","doi":"10.1145/2659766.2661116","DOIUrl":"https://doi.org/10.1145/2659766.2661116","url":null,"abstract":"Sometime in the coming years -- whether through ubiquitous projection, AR glasses, smart contact lenses, retinal implants or some technology as yet unknown -- we will live in an eccescopic world, where everything we see around us will be augmented by computer graphics, including our own appearance. In a sense, we are just now starting to enter the Age of Computer Graphics. As children are born into this brave new world, what will their experience be? Face to face communication, both in-person and over great distances, will become visually enhanced, and any tangible object can become an interface to digital information [1]. Hand gestures will be able to produce visual artifacts. After these things come to pass, how will future generations of children evolve natural language itself [2]? How might they think and speak differently about the world around them? What will life in such a world be like for those who are native born to it? We will present some possibilities, and some suggestions for empirical ways to explore those possibilities now -- without needing to wait for those smart contact lenses","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133686650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A raycast approach to hybrid touch / motion capture virtual reality user experience
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661226
Ryan P. Spicer, Rhys Yahata, Evan A. Suma, M. Bolas
We present a novel approach to integrating a touch screen device into the experience of a user wearing a Head Mounted Display (HMD) in an immersive virtual reality (VR) environment with tracked head and hands.
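A minimal sketch of the general raycast idea, assuming a tablet whose pose is tracked by the motion capture system: the touched point on the physical screen defines a world-space ray along the screen normal, which can then be intersected with scene geometry (a sphere here) for selection. The names and axis conventions are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: cast a ray from a tracked touch screen into a VR
# scene. Axis conventions and function names are assumed for illustration.
import numpy as np

def touch_ray(device_pose, touch_uv, screen_size_m):
    """Build a world-space ray from a touch on a tracked tablet.

    device_pose: (4, 4) tablet-to-world transform from motion capture
    touch_uv: (2,) touch position normalised to [-0.5, 0.5] on each axis
    screen_size_m: (width, height) of the screen in metres
    """
    # Touch point on the physical screen, in the tablet's local frame
    # (x right, y up, z out of the screen toward the user).
    local = np.array([touch_uv[0] * screen_size_m[0],
                      touch_uv[1] * screen_size_m[1], 0.0, 1.0])
    origin = (device_pose @ local)[:3]
    direction = -device_pose[:3, 2]  # ray through the screen into the scene
    return origin, direction / np.linalg.norm(direction)

def intersect_sphere(origin, direction, centre, radius):
    """Return distance along the unit ray to a sphere, or None on a miss."""
    oc = origin - centre
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None
```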
{"title":"A raycast approach to hybrid touch / motion capturevirtual reality user experience","authors":"Ryan P. Spicer, Rhys Yahata, Evan A. Suma, M. Bolas","doi":"10.1145/2659766.2661226","DOIUrl":"https://doi.org/10.1145/2659766.2661226","url":null,"abstract":"We present a novel approach to integrating a touch screen device into the experience of a user wearing a Head Mounted Display (HMD) in an immersive virtual reality (VR) environment with tracked head and hands.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115432126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Spatial gestures
Pub Date: 2014-10-04 DOI: 10.1145/3247432
H. Ishii
{"title":"Session details: Spatial gestures","authors":"H. Ishii","doi":"10.1145/3247432","DOIUrl":"https://doi.org/10.1145/3247432","url":null,"abstract":"","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122839309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating a SLAM-based handheld augmented reality guidance system
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661212
Jarkko Polvi, Takafumi Taketomi, Goshiro Yamamoto, M. Billinghurst, C. Sandor, H. Kato
In this poster we present the design and evaluation of a Handheld Augmented Reality (HAR) prototype system for guidance.
{"title":"Evaluating a SLAM-based handheld augmented reality guidance system","authors":"Jarkko Polvi, Takafumi Taketomi, Goshiro Yamamoto, M. Billinghurst, C. Sandor, H. Kato","doi":"10.1145/2659766.2661212","DOIUrl":"https://doi.org/10.1145/2659766.2661212","url":null,"abstract":"In this poster we present the design and evaluation of a Handheld Augmented Reality (HAR) prototype system for guidance.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123830610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VideoHandles: replicating gestures to search through action-camera video
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2659784
Jarrod Knibbe, S. A. Seah, Mike Fraser
We present VideoHandles, a novel interaction technique to support rapid review of wearable video camera data by re-performing gestures as a search query. The availability of wearable video capture devices has led to a significant increase in activity logging across a range of domains. However, searching through and reviewing footage for data curation can be a laborious and painstaking process. In this paper we showcase the use of gestures as search queries to support review and navigation of video data. By exploring example self-captured footage across a range of activities, we propose two video data navigation styles using gestures: prospective gesture tagging and retrospective gesture searching. We describe VideoHandles' interaction design, motivation and results of a pilot study.
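One plausible baseline for retrospective gesture searching, assuming the camera (or a paired wearable) logs a motion trace such as IMU samples: compare the re-performed query gesture against sliding windows of the logged trace with dynamic time warping (DTW) and return the best-matching position. This is a hedged sketch of the idea, not necessarily the matching method VideoHandles uses.

```python
# Assumed baseline: find where a re-performed gesture occurs in a logged
# motion trace by DTW over sliding windows. Names are illustrative.
import numpy as np

def dtw(a, b):
    """DTW distance between two (T, D) motion sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def search(query, trace, window, stride):
    """Return (start_index, distance) of the best-matching window."""
    best = (None, np.inf)
    for s in range(0, len(trace) - window + 1, stride):
        d = dtw(query, trace[s:s + window])
        if d < best[1]:
            best = (s, d)
    return best
```

The matched start index can then be mapped back to a video timestamp to jump playback to the candidate moment.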
{"title":"VideoHandles: replicating gestures to search through action-camera video","authors":"Jarrod Knibbe, S. A. Seah, Mike Fraser","doi":"10.1145/2659766.2659784","DOIUrl":"https://doi.org/10.1145/2659766.2659784","url":null,"abstract":"We present VideoHandles, a novel interaction technique to support rapid review of wearable video camera data by re-performing gestures as a search query. The availability of wearable video capture devices has led to a significant increase in activity logging across a range of domains. However, searching through and reviewing footage for data curation can be a laborious and painstaking process. In this paper we showcase the use of gestures as search queries to support review and navigation of video data. By exploring example self-captured footage across a range of activities, we propose two video data navigation styles using gestures: prospective gesture tagging and retrospective gesture searching. We describe VideoHandles' interaction design, motivation and results of a pilot study.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127197008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Re:form: rapid designing system based on fusion and illusion of digital/physical models
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661205
Keiko Yamamoto, I. Kanaya, M. Bordegoni, U. Cugini
Our goal is to allow creators to focus on their creative activity, developing their ideas for physical products in an intuitive way. We propose a new CAD system that allows users to draw virtual lines on the surface of a physical object using see-through AR, and to import 3D data and produce the corresponding real object through 3D printing.
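Drawing on the surface of a physical object reduces, at its core, to projecting screen-space stroke points onto the object's scanned mesh. The sketch below does this with standard Moller-Trumbore ray-triangle intersection; the function names, camera model, and mesh representation are assumptions, not Re:form's implementation.

```python
# Assumed sketch: project a stroke of screen rays onto a triangle mesh
# to obtain a polyline "drawn" on the physical object's surface.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: distance t along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def project_stroke(rays, triangles):
    """Map each (origin, direction) ray to its nearest surface point."""
    stroke = []
    for origin, direction in rays:
        hits = [t for tri in triangles
                if (t := ray_triangle(origin, direction, *tri)) is not None]
        if hits:
            stroke.append(origin + min(hits) * direction)
    return stroke
```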
{"title":"Re:form: rapid designing system based on fusion and illusion of digital/physical models","authors":"Keiko Yamamoto, I. Kanaya, M. Bordegoni, U. Cugini","doi":"10.1145/2659766.2661205","DOIUrl":"https://doi.org/10.1145/2659766.2661205","url":null,"abstract":"Our goal is to allow the creators to focus on their creative activity, developing their ideas for physical products in an intuitive way. We propose a new CAD system allows users to draw virtual lines on the surface of the physical object using see-through AR, and also allows users to import 3D data and make its real object through 3D printing.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133873604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Session details: Hybrid interaction spaces
Pub Date: 2014-10-04 DOI: 10.1145/3247434
B. Fröhlich
{"title":"Session details: Hybrid interaction spaces","authors":"B. Fröhlich","doi":"10.1145/3247434","DOIUrl":"https://doi.org/10.1145/3247434","url":null,"abstract":"","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"48 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114039154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2659771
J. Orlosky, Qifan Wu, K. Kiyokawa, H. Takemura, Christian Nitschke
A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real world augmented reality applications since peripheral vision is severely limited. Existing wide field of view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand a user's peripheral vision inside a video see-through head mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment can occur. The peripheral areas of the display show content through the curvature of each of two fisheye lenses using a modified compression algorithm so that objects outside of the inherent viewing angle of the display become visible. We first test an initial prototype with 180° field of view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision and the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that overall detection rate is 62.2% for the display versus 89.7% for the naked eye.
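The undistorted-centre/compressed-periphery idea can be pictured as a radial remapping from eccentricity angle to screen radius: linear inside a central cone, then compressed so the 238° lens coverage still fits on the display. The breakpoints and the equidistant model in this sketch are assumptions for illustration, not the paper's calibration.

```python
# Illustrative radial mapping: identity-like (equidistant) in the central
# field, compressed in the periphery. All constants are assumed values.
import math

CENTRE_FOV = math.radians(60.0)   # undistorted central half-angle: assumed
MAX_FOV = math.radians(119.0)     # half of the 238-degree lens coverage
CENTRE_RADIUS = 0.6               # fraction of screen radius for the centre

def screen_radius(theta):
    """Map eccentricity angle theta (radians) to normalised screen radius."""
    if theta <= CENTRE_FOV:
        # Linear (equidistant) mapping: no added distortion in the centre.
        return CENTRE_RADIUS * theta / CENTRE_FOV
    # Squeeze the remaining angular range into the outer ring of the
    # display, so wider angles become visible at reduced spatial scale.
    frac = (theta - CENTRE_FOV) / (MAX_FOV - CENTRE_FOV)
    return CENTRE_RADIUS + (1.0 - CENTRE_RADIUS) * frac
```

Here the central 60° gets 60% of the screen radius while the outer 59° shares the remaining 40%, which is what "peripheral spatial compression" amounts to in this toy model.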
{"title":"Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays","authors":"J. Orlosky, Qifan Wu, K. Kiyokawa, H. Takemura, Christian Nitschke","doi":"10.1145/2659766.2659771","DOIUrl":"https://doi.org/10.1145/2659766.2659771","url":null,"abstract":"A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real world augmented reality applications since peripheral vision is severely limited. Existing wide field of view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand a user's peripheral vision inside a video see-through head mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment can occur. The peripheral areas of the display show content through the curvature of each of two fisheye lenses using a modified compression algorithm so that objects outside of the inherent viewing angle of the display become visible. We first test an initial prototype with 180° field of view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision and the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that overall detection rate is 62.2% for the display versus 89.7% for the naked eye.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123644433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40