Latest articles in 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics)
Survey on Visualization of Biomacromolecule
Q3 Computer Science Pub Date : 2021-12-01 DOI: 10.3724/sp.j.1089.2021.19265
Dongliang Guo, Yanfen Wang, Yu Li, Ya. Guo, Ximing Xu, Junlan Nie
Citations: 0
t-SNE for Complex Multi-Manifold High-Dimensional Data
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18806
Rongzhen Bian, Jian Zhang, Liang Zhou, Peng Jiang, Baoquan Chen, Yunhai Wang
To address the problem that t-SNE cannot distinguish multiple intersecting manifolds well, a visual dimensionality-reduction method is proposed. Building on t-SNE, both the Euclidean metric and local PCA are considered when computing the high-dimensional probabilities, so that different manifolds can be distinguished. The standard t-SNE gradient solver can then be used directly to obtain the embedding. Finally, three synthetic datasets and two real datasets are used to test the proposed method, quantitatively evaluating how well different manifolds are separated and how well the neighborhood structure within each manifold is preserved in the embedding. The results show that the proposed method is more effective on multi-manifold data and preserves the neighborhood structure of each manifold well.
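The abstract's key idea, weighting the high-dimensional affinities by both Euclidean distance and local-PCA agreement, can be sketched as follows. This is a toy illustration, not the authors' implementation: the neighborhood size `k`, the tangent-space dimension `d`, and the principal-angle-cosine weighting are assumptions, since the abstract does not give the exact formula.

```python
import numpy as np

def local_pca_bases(X, nbr_idx, d=1):
    """Estimate a d-dimensional tangent basis at each point from its neighbors."""
    bases = []
    for nbrs in nbr_idx:
        P = X[nbrs] - X[nbrs].mean(axis=0)
        _, _, Vt = np.linalg.svd(P, full_matrices=False)
        bases.append(Vt[:d])                      # top-d principal directions
    return np.asarray(bases)                      # shape (n, d, D)

def manifold_aware_affinities(X, k=8, d=1, sigma=1.0):
    """Gaussian affinities, down-weighted when local tangent spaces disagree."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbr_idx = np.argsort(D2, axis=1)[:, 1:k + 1]  # k nearest neighbors, self excluded
    B = local_pca_bases(X, nbr_idx, d)
    P = np.exp(-D2 / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    for i in range(n):
        for j in range(n):
            if i != j:
                # |det(B_i B_j^T)| = product of cosines of the principal angles
                # between the two local subspaces (1 if aligned, 0 if orthogonal)
                P[i, j] *= abs(np.linalg.det(B[i] @ B[j].T))
    P = (P + P.T) / 2.0                           # symmetrize, as t-SNE does
    return P / P.sum()                            # joint distribution for the gradient solver
```

The resulting matrix can be handed to any standard t-SNE gradient solver in place of the usual perplexity-calibrated affinities, which is consistent with the abstract's claim that the t-SNE gradient solution method is used unchanged.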
Citations: 0
Multi-Focus Image Fusion Based on Generative Adversarial Network
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18770
Liubing Jiang, Dian Zhang, Bo Pan, Peng Zheng, L. Che
Citations: 1
Human-in-the-Loop Based Online Handwriting Mathematical Expressions Recognition
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18796
Wenhui Kang, Jin Huang, Feng Tian, Xiangmin Fan, Jie Liu, G. Dai
Citations: 0
RGB-D Image Saliency Detection Based on Cross-Model Feature Fusion
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18710
Zheng Chen, Xiaoli Zhao, Jiaying Zhang, Mingchen Yin, Hanchen Ye, Haobin Zhou
Citations: 0
Freehand-Sketched Part Recognition Using VGG-CapsNet
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18774
Zhongliang Yang, Ruihong Huang, Yumiao Chen, Song Zhang, Xinhua Mao
To address the difficulty that existing CAD systems have in accurately matching parts from freehand sketches during conceptual design, a recognition model for freehand part sketches, VGG-CapsNet, is proposed, combining a pre-trained network (VGG) with a capsule network (CapsNet). Five designers were recruited to sketch parts, producing freehand sketches of 23 kinds of parts, including standard and non-standard parts. Between-group and within-group experiments were designed, and VGG-CapsNet recognition models were constructed for each. The recognition results of the VGG-CapsNet models were compared with those of rVGG-13 and rCNN-13 models. The experimental results show that the mean accuracy of the VGG-CapsNet model is higher than that of the other two models, providing technical support for the retrieval and reuse of part-design knowledge.
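What distinguishes the capsule head from a plain CNN classifier is that each class is represented by a vector whose length encodes class probability, produced by the "squash" nonlinearity. A minimal numpy sketch of squash and a routing-free capsule layer is shown below; the layer sizes are illustrative and the uniform coupling is a simplification of dynamic routing, not the paper's VGG-CapsNet implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule nonlinearity: preserves direction, maps vector length into [0, 1)."""
    sq_norm = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def capsule_layer(u, W):
    """One capsule layer with uniform coupling instead of iterative routing.

    u: (n_in, d_in) input capsule vectors.
    W: (n_out, n_in, d_out, d_in) per-pair transformation matrices.
    Returns (n_out, d_out) output capsules; their norms act as class scores.
    """
    u_hat = np.einsum('oied,id->oie', W, u)  # each input predicts each output capsule
    s = u_hat.mean(axis=1)                   # uniform coupling over inputs
    return squash(s)
```

In a VGG-CapsNet-style model, `u` would come from reshaping the pre-trained VGG feature maps into primary capsules, and the predicted class is the output capsule with the largest norm.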
Citations: 0
Automatic Measurement of Elongation at Break of Cable Sheath Based on Binocular Vision
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18779
G. Zhang, Jun Gong, Junsong Chen, Zhidong Zhang, Ce Zhu, Kai Liu
Citations: 0
Ship Target Recognition Under Different Sunlight Intensity
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18777
Kun Liu, L. Mi
In surface-target monitoring, the clarity of a ship target varies with the reflection intensity of the sea surface under different sunlight intensities, which makes the recognition rate unstable and increases the false-alarm rate. A ship-target recognition algorithm based on ResNet-50 is therefore proposed. First, a ResNet-50 network extracts image features, and a sunlight-robust loss constrains the features before and after a change in sunlight intensity so as to reduce their difference. Then, a gray-scale histogram is used to compute statistical moments of the features, yielding six features: contrast, brightness, smoothness, information, third-order moment, and entropy; a new feature vector is generated, and the sunlight-robust loss is applied again to the features before and after the sunlight change. Finally, the two constraints are combined into a loss function, and the weights are optimized during training using Bayesian adaptive hyperparameters. The experimental results show that the average recognition rate on the ship sunlight-variation database reaches 90.47%, about 4.00% higher, and that the recognition rates of ship images under different degrees of sunlight variation increase by 3.14%, 6.07%, and 16.41%, indicating that the algorithm constrains sunlight variation well and significantly improves the recognition rate.
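The six histogram statistics named in the abstract match the classic statistical texture descriptors (mean brightness, standard-deviation contrast, smoothness, third moment, uniformity as the "information" measure, and entropy). A numpy sketch under that assumption, with intensities normalized to [0, 1] and 256 bins, might look like this:

```python
import numpy as np

def histogram_texture_features(img, bins=256):
    """Six gray-level histogram statistics of an image with values in [0, 1]."""
    counts, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()                    # normalized histogram
    z = (np.arange(bins) + 0.5) / bins           # bin-center gray levels
    mean = (z * p).sum()                         # brightness
    var = ((z - mean) ** 2 * p).sum()
    contrast = np.sqrt(var)                      # standard-deviation contrast
    smoothness = 1.0 - 1.0 / (1.0 + var)         # 0 for constant regions
    third = ((z - mean) ** 3 * p).sum()          # histogram skew
    uniformity = (p ** 2).sum()                  # "information"/energy measure
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return np.array([contrast, mean, smoothness, uniformity, third, entropy])
```

In the algorithm described above, these statistics would be computed over the ResNet-50 feature maps rather than the raw image, and the resulting six-dimensional vector feeds the second sunlight-robust loss term.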
Citations: 0
Implicit Progressive-Iterative Algorithm of Curves and Surfaces with Compactly Supported Radial Basis Functions
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18807
Haibo Wang, Tao Liu, Shengjun Liu, Wenyan Wei, Xinru Liu, Pingbo Liu, Yanyu Bai, Yue Chen
Citations: 2
Hyperspectral Image Classification Based on SE-Res2Net and Multi-Scale Spatial Spectral Fusion Attention Mechanism
Q3 Computer Science Pub Date : 2021-11-01 DOI: 10.3724/sp.j.1089.2021.18778
Qin Xu, Yulian Liang, Dongyue Wang, B. Luo
To extract more discriminative features from hyperspectral images and to prevent the degradation caused by deepening the network, a novel multi-scale feature extraction module, SE-Res2Net, is developed based on the multi-scale residual network (Res2Net) and the squeeze-and-excitation network (SENet), together with a multi-scale spectral-spatial fusion attention module for hyperspectral image classification. To overcome the degradation problem caused by network deepening, the SE-Res2Net module uses channel grouping to extract fine-grained multi-scale features of hyperspectral images, obtaining multiple receptive fields of different granularity. A channel optimization module is then employed to quantify the importance of the feature maps at the channel level. To optimize features in the spatial and spectral dimensions simultaneously, a multi-scale spectral-spatial fusion attention module is designed to mine the relationships between different spatial positions and different spectral dimensions at different scales using asymmetric convolution, which not only reduces computation but also effectively extracts discriminative spectral-spatial fusion features, further improving the accuracy of hyperspectral image classification. Comparison experiments on three public datasets, Indian Pines, University of Pavia, and Grss_dfc_2013, show that the proposed method achieves higher overall accuracy (OA), average accuracy (AA), and Kappa coefficient than other state-of-the-art deep networks.
Citations: 6
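The channel-optimization step described above, quantifying the importance of feature maps at the channel level, is the squeeze-and-excitation operation from SENet. A minimal numpy sketch is given below; the reduction ratio and the random weights in the usage example are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, W1, W2):
    """Squeeze-and-excitation channel reweighting.

    x:  (C, H, W) feature maps.
    W1: (C // r, C) first FC layer (r = reduction ratio).
    W2: (C, C // r) second FC layer.
    """
    z = x.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    a = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))    # excitation: FC-ReLU-FC-sigmoid -> (C,)
    return x * a[:, None, None]                  # scale each channel by its learned gate
```

In SE-Res2Net, this gating would sit after the Res2Net-style grouped convolutions, so that each fine-grained multi-scale channel group is reweighted by its estimated importance.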