
2011 International Conference on Virtual Reality and Visualization — Latest Publications

Motion Control of Virtual Human Based on Optical Motion Capture in Immersive Virtual Maintenance System
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.24
Chen Shanmin, Ning Tao, Wang Ke
Applying immersive virtual maintenance technology allows product problems to be found during the design process, guaranteeing quality and reducing life-cycle cost. As an indispensable part of immersive virtual maintenance, motion control of the virtual human is a critical factor in improving simulation efficiency. However, it is still based on motion editing or images, which makes simulating the maintenance process time-consuming. We therefore propose a real-time motion control algorithm based on optical motion capture that makes virtual maintenance both immersive and efficient. To keep the algorithm fast, an editable human model is constructed from a simplified human skeleton. To let the virtual human work over an unlimited range while the operator moves within a limited space, a walking gesture is defined and a prototype action database for the virtual human is built. To obtain continuous visual effects, the virtual human's view direction is smoothed using a gyroscope. Finally, experiments demonstrate the algorithm and its effectiveness.
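The abstract does not specify how the gyroscope readings smooth the view direction; as a minimal sketch, assuming a simple exponential smoother over yaw angles (the function name, the smoothing factor, and the wrap-around handling are illustrative, not the paper's method):

```python
import math

def smooth_yaw(samples, alpha=0.2):
    """Exponentially smooth a stream of yaw angles (radians),
    handling wrap-around at +/-pi so the view turns the short way."""
    smoothed = samples[0]
    out = [smoothed]
    for y in samples[1:]:
        # shortest signed angular difference between sample and estimate
        d = math.atan2(math.sin(y - smoothed), math.cos(y - smoothed))
        smoothed += alpha * d
        # renormalize into (-pi, pi]
        smoothed = math.atan2(math.sin(smoothed), math.cos(smoothed))
        out.append(smoothed)
    return out
```

A small `alpha` trades responsiveness for stability, which matches the goal of continuous visual effects.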
Citations: 7
Multi-cue Based Discriminative Visual Object Contour Tracking
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.52
Wang Aiping, Chen Zhiquan, Li Sikun
This paper proposes a discriminative visual object contour tracking algorithm using a multi-cue fusion particle filter. A novel contour evolution energy is designed by integrating an incremental-learning discriminative model into the parametric snake model, and this energy function is combined with a mixed cascade particle filter that fuses multiple observation models for accurate object contour tracking. In the proposed method, the incremental-learning discriminative model provides an observation model of the object's appearance, while two observation models of contour deformation are used: the bending energy, computed by the thin-plate spline (TPS) model with higher-order graph matching between contours in two consecutive frames, and the energy obtained from the contour evolution process. To handle these multiple observation models, a mixed cascade importance sampling process fuses the observations efficiently. In addition, the dynamic model used for tracking is improved with optical flow. Experiments on real videos show that the approach substantially improves object contour tracking performance.
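Fusing several observation models in a particle filter can be sketched as follows; this is a generic multiply-and-normalize importance-weighting scheme followed by multinomial resampling, not the paper's mixed cascade sampler, and all function names are assumptions:

```python
import random

def fuse_weights(appearance, bending, evolution):
    """Fuse per-particle likelihoods from several observation models
    by multiplying them, then normalizing into importance weights."""
    raw = [a * b * e for a, b, e in zip(appearance, bending, evolution)]
    total = sum(raw)
    return [w / total for w in raw]

def resample(particles, weights, rng=random):
    """Multinomial resampling: draw particles proportionally to weight."""
    idx = rng.choices(range(len(particles)), weights=weights,
                      k=len(particles))
    return [particles[i] for i in idx]
```

A cascade variant would instead apply the cheap cues first and re-weight survivors with the expensive ones; the joint product above is the simplest baseline.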
Citations: 0
Image Texture Feature Extraction Method Based on Regional Average Binary Gray Level Difference Co-occurrence Matrix
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.20
Jian Yang, Jingfeng Guo
Texture is a measure of the relationships among pixels in a local area, reflecting changes in the image's spatial gray levels. This paper presents a texture feature extraction method based on a regional average binary gray level difference co-occurrence matrix, which combines structural texture analysis with statistical methods. First, we compute the average binary gray level difference over the eight neighbors of each pixel to obtain an average binary gray level difference image that expresses the variation pattern of regional gray levels. Second, the regional co-occurrence matrix is constructed from these average binary gray level differences. Finally, second-order statistical parameters reflecting the image's texture features are extracted from the regional co-occurrence matrix. Theoretical analysis and experimental results show that the method is accurate and valid.
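A minimal sketch of the first step, under one plausible reading of "average binary gray level difference": the mean absolute gray-level difference between each interior pixel and its eight neighbors. The paper's exact binarization rule is not given in the abstract, so this stops at the averaging stage:

```python
def avg_neighbor_diff(img):
    """For each interior pixel, the mean absolute gray-level difference
    to its eight neighbors (border pixels are left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i][j]
            diffs = [abs(img[i + di][j + dj] - c)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            out[i][j] = sum(diffs) / 8.0
    return out
```

The co-occurrence matrix would then be tallied over pairs of these values at a fixed offset, as in a standard gray-level co-occurrence matrix.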
Citations: 15
CUDA-Based Volume Ray-Casting Using Cubic B-spline
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.10
Changgong Zhang, P. Xi, C. Zhang
GPU-based volume ray-casting can deliver the performance needed for interactive medical visualization. The more samples we take along each ray, i.e., the higher the sampling rate, the more accurately we can represent the volume data, especially when the combined frequency of the volume and the transfer function is high. However, this reduces rendering performance considerably, because more samples mean more time-consuming memory accesses on the GPU. In this paper, we propose an effective volume ray-casting algorithm that takes additional samples within each ray segment using a cubic B-spline. This improves the sampling rate and yields high-quality images without obvious performance degradation. Moreover, the algorithm requires no other adjustments, which keeps it flexible and simple. We implement the ray-caster with the CUDA programming interface rather than a conventional fragment shader. Experimental results show that the method can serve as an effective medical visualization tool.
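The uniform cubic B-spline used to reconstruct sub-samples between stored ray samples is standard; the sketch below gives the basis and an illustrative `resample_ray` helper (the paper's actual kernel runs in CUDA, so this is only the math, not the implementation):

```python
def cubic_bspline(p0, p1, p2, p3, t):
    """Uniform cubic B-spline reconstruction between p1 and p2, t in [0,1].
    The four basis weights always sum to 1 (partition of unity)."""
    t2, t3 = t * t, t * t * t
    return ((-t3 + 3 * t2 - 3 * t + 1) * p0
            + (3 * t3 - 6 * t2 + 4) * p1
            + (-3 * t3 + 3 * t2 + 3 * t + 1) * p2
            + t3 * p3) / 6.0

def resample_ray(samples, factor):
    """Evaluate `factor` B-spline sub-samples in each interior segment,
    raising the effective sampling rate along the ray."""
    out = []
    for i in range(1, len(samples) - 2):
        for k in range(factor):
            out.append(cubic_bspline(samples[i - 1], samples[i],
                                     samples[i + 1], samples[i + 2],
                                     k / factor))
    return out
```

Note that a B-spline approximates rather than interpolates: at t = 0 the value is (p0 + 4·p1 + p2)/6, which is what smooths out sampling noise.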
Citations: 12
Interactive Visual Analysis of Vortex in 3D Flow with FFDL
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.30
Enya Shen, Huaxun Xu, Wenke Wang, Xun Cai, L. Zeng, Sikun Li
Feature visualization plays an important role in visualizing complicated flows because it can highlight flow features with a simplified representation. Traditional feature visualization methods may extract important features of the flow field imprecisely because they cannot draw on the user's knowledge and experience. This paper presents a particle-based visualization system built on interactive fuzzy feature extraction and interactive visual analysis. To obtain more precise feature extraction, we propose an interactive fuzzy feature description language (FFDL) and an interactive fuzzy feature extraction algorithm. Building on our earlier work, we introduce a proportion ratio for different rules and further optimize the algorithm in practice through discussions with domain researchers and extensive experiments. Further experiments show that the method not only makes full use of the user's ability to extract features precisely but also reflects the uncertainty of the numerical simulation data.
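The FFDL grammar itself is not reproduced in the abstract. As a hedged sketch of the fuzzy primitives such a language typically builds on — piecewise-linear membership ramps and min/max connectives — with all names illustrative:

```python
def ramp_membership(x, lo, hi):
    """Piecewise-linear membership degree: 0 below lo, 1 above hi,
    linear in between. Used to express 'vorticity is high' softly."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def fuzzy_and(*degrees):
    """Standard min-norm conjunction of membership degrees."""
    return min(degrees)

def fuzzy_or(*degrees):
    """Standard max-norm disjunction of membership degrees."""
    return max(degrees)
```

A rule like "high vorticity AND low pressure" would then score each cell by `fuzzy_and(ramp_membership(vort, ...), 1 - ramp_membership(pressure, ...))`, and the result is a degree in [0, 1] rather than a hard yes/no, which is what lets the system reflect simulation uncertainty.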
Citations: 0
Research on Interaction Technologies in Desktop Virtual Maintenance System of Certain Weapon
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.27
Liu Pengyuan, Ma Long, Li Ruihua
In the desktop virtual maintenance system of a certain weapon, the whole interaction process is divided into three phases, pickup, drag, and release, using common interaction devices: a 2-D mouse and a keyboard. Because the mouse provides only 2-D screen coordinates while the virtual entities have 3-D world coordinates, a mapping between the screen coordinate system and the 3-D world coordinate system must be established. Based on this mapping, the key technologies of each phase are presented, including an acupuncture pickup method, a rough area judgment method, and drag and release control methods. Applying these pickup, drag, and release methods for 3-D virtual entities to the weapon's desktop virtual maintenance system shows that both real-time performance and the sense of reality are improved effectively.
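The screen-to-world mapping at the heart of the pickup phase can be sketched as a camera-space picking ray. This is the standard unprojection for a perspective camera, with assumed parameter names; the paper's specific acupuncture pickup method is not detailed in the abstract:

```python
import math

def pick_ray(mouse_x, mouse_y, width, height, fov_y_deg, aspect):
    """Unit direction of a camera-space picking ray through a 2-D
    mouse position, for a perspective camera looking down -z."""
    # normalized device coordinates in [-1, 1]; screen y grows downward
    ndc_x = 2.0 * mouse_x / width - 1.0
    ndc_y = 1.0 - 2.0 * mouse_y / height
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    dx = ndc_x * tan_half * aspect
    dy = ndc_y * tan_half
    dz = -1.0
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)
```

Transforming this direction by the inverse view matrix gives the world-space ray, which is then intersected with entity bounds to decide what the mouse picked.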
Citations: 4
Modeling and Simulation on Radar Detection Range under Complex Electromagnetic Environment
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.55
Xiao Bin, Sun Chunsheng
A series of models for radar detection range under a complex electromagnetic environment is established, covering antenna gain, multi-path propagation, attenuation, rainfall and sea-surface clutter, and active electrical jamming. A radar range simulation with visualization is implemented and provides direct imagery for tactical decision-making.
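The abstract's environment models (multipath, rain and sea clutter, jamming) are not detailed; as a hedged baseline, the classical free-space radar range equation that such models extend (parameter names are assumptions, and the environmental terms above would enter as extra losses and clutter/jamming noise):

```python
import math

def radar_max_range(pt, gain, wavelength, rcs, p_min, loss=1.0):
    """Classical radar range equation: maximum detection range.

    pt         -- transmitted power (W)
    gain       -- antenna gain (same antenna for tx and rx)
    wavelength -- carrier wavelength (m)
    rcs        -- target radar cross section (m^2)
    p_min      -- minimum detectable received power (W)
    loss       -- aggregate system/propagation loss factor (>= 1)
    """
    num = pt * gain ** 2 * wavelength ** 2 * rcs
    den = (4.0 * math.pi) ** 3 * p_min * loss
    return (num / den) ** 0.25
```

The fourth-root dependence is why range degrades gracefully: a 16x jump in transmit power only doubles detection range, and jamming effectively raises `p_min`, shrinking it.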
Citations: 3