
Latest publications from the 2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis

Representation and classification of iris textures based on diagonal linear discriminant analysis
Pub Date : 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970356
E. Assunção, J. R. Pereira, M. Costa, C. Filho, Rafael Padilla
Subspace methods are frequently used in pattern recognition problems to reduce the dimensionality of a space by determining its projection vectors. This paper presents subspace methods for feature extraction from iris images: two-dimensional linear discriminant analysis (2DLDA), diagonal linear discriminant analysis (DiaLDA), and their combination (DiaLDA+2DLDA). The methods were applied to the UBIRIS image database, and the experimental results showed that DiaLDA+2DLDA outperformed 2DLDA in recognition accuracy. Both methods are powerful in terms of dimensionality reduction and class discrimination.
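To make the projection step concrete, here is a minimal sketch of a column-direction 2DLDA computed directly on image matrices; the function names, the number of retained components k, and the use of a pseudo-inverse are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def two_dlda(images, labels, k=10):
    """Compute a column-direction 2DLDA projection matrix.

    images : array of shape (n, h, w) -- training image matrices
    labels : array of shape (n,)      -- class ids
    k      : number of projection vectors to keep
    """
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    global_mean = images.mean(axis=0)
    w = images.shape[2]

    Gb = np.zeros((w, w))  # between-class image scatter
    Gw = np.zeros((w, w))  # within-class image scatter
    for c in np.unique(labels):
        Xc = images[labels == c]
        Mc = Xc.mean(axis=0)
        D = Mc - global_mean
        Gb += len(Xc) * D.T @ D
        for A in Xc:
            E = A - Mc
            Gw += E.T @ E

    # Generalized eigenproblem Gb v = lambda Gw v, solved via pinv(Gw) @ Gb.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Gw) @ Gb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:k]].real  # (w, k) projection vectors

def project(images, W):
    """Project each (h, w) image onto the k discriminant directions."""
    return np.asarray(images, dtype=float) @ W  # shape (n, h, k)
```

DiaLDA would apply the same scatter computation to diagonal images derived from the originals, and the DiaLDA+2DLDA combination would project with both resulting matrices; those steps are omitted from this sketch.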
Citations: 1
H.264 visually lossless compressibility index: Psychophysics and algorithm design
Pub Date : 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970364
Anush K. Moorthy, A. Bovik
Although the term ‘visually lossless’ (VL) has been used liberally in the video compression literature, there does not appear to be a systematic evaluation of what it means for a video to be compressed in a visually lossless manner. Here, we undertake a psychovisual study to infer the visually lossless threshold for H.264 compression of videos spanning a wide range of contents. Based on the results of this study, we then propose a compressibility index that provides a measure of the appropriate bit-rate for VL H.264 compression of a video, given texture (i.e., spatial activity) and motion (i.e., temporal activity) information. This compressibility index has been made available online at [1] in order to facilitate practical application of the research presented here and to further research in the area of VL compression.
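The abstract describes the index as being driven by texture (spatial activity) and motion (temporal activity). The sketch below shows one common way such activity measures are computed (Sobel-based spatial information and frame-difference temporal information, in the spirit of ITU-T P.910); the function names and the exact measures are assumptions for illustration, not the authors' published index.

```python
import numpy as np
from scipy import ndimage

def spatial_activity(frame):
    """Std. dev. of the Sobel gradient magnitude over one grayscale frame."""
    gx = ndimage.sobel(frame.astype(float), axis=1)
    gy = ndimage.sobel(frame.astype(float), axis=0)
    return np.hypot(gx, gy).std()

def temporal_activity(prev_frame, frame):
    """Std. dev. of the luminance difference between consecutive frames."""
    return (frame.astype(float) - prev_frame.astype(float)).std()

def activity_features(frames):
    """Per-video texture/motion summary that could feed a compressibility index."""
    si = max(spatial_activity(f) for f in frames)
    ti = max(temporal_activity(a, b) for a, b in zip(frames[:-1], frames[1:]))
    return si, ti
```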
Citations: 2
A perceptual video quality model for mobile platform considering impact of spatial, temporal, and amplitude resolutions
Pub Date : 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970365
Yen-Fu Ou, Yuanyi Xue, Zhan Ma, Yao Wang
In this paper, we investigate the impact of spatial, temporal and amplitude resolution (STAR) on the perceptual quality of compressed video. Subjective quality tests were carried out on the TI Zoom2 mobile development platform (MDP). Seven source sequences are included in the tests, and for each source sequence we have 32 test configurations generated by the JSVM encoder (4 QP levels, 5 spatial resolutions, and 3 temporal resolutions), for a total of 224 processed video sequences (PVSs). Videos coded at different spatial resolutions are displayed at the full screen size of the mobile platform. We report the impact of spatial resolution (SR), temporal resolution (TR) and quantization stepsize (QS) on perceptual quality, both individually and jointly. We found that the impact of SR, TR and QS can each be captured by a function with a single content-dependent parameter. The joint impact of SR, TR and QS can be modeled by the product of these three functions. The complete model correlates well with the subjective ratings, with a Pearson Correlation Coefficient (PCC) of 0.99.
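A minimal sketch of the product structure described above: three factors, each governed by a single content-dependent parameter, one per resolution dimension, multiplied together. The exponential forms and parameter names below are illustrative placeholders, not necessarily the exact functions fitted in the paper.

```python
import numpy as np

def q_temporal(t, t_max, alpha):
    """Normalized quality factor for a reduced frame rate t; alpha is content dependent."""
    return (1 - np.exp(-alpha * t / t_max)) / (1 - np.exp(-alpha))

def q_spatial(s, s_max, beta):
    """Normalized quality factor for a reduced spatial resolution s; beta is content dependent."""
    return (1 - np.exp(-beta * s / s_max)) / (1 - np.exp(-beta))

def q_quant(qs, qs_min, gamma):
    """Quality falloff with coarser quantization stepsize qs; gamma is content dependent."""
    return np.exp(-gamma * (qs / qs_min - 1))

def perceptual_quality(t, s, qs, t_max, s_max, qs_min, alpha, beta, gamma, q_max=100.0):
    """Joint STAR model: product of the three single-parameter factors, scaled by q_max."""
    return q_max * q_temporal(t, t_max, alpha) * q_spatial(s, s_max, beta) * q_quant(qs, qs_min, gamma)
```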
Citations: 27
Study on distortion conspicuity in stereoscopically viewed 3D images
Pub Date : 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970349
Ming-Jun Chen, A. Bovik, L. Cormack
We describe a study aimed at increasing our understanding of the perception of distorted stereoscopic 3D images, by analyzing subjects' performance in locating local distortions in stereoscopically viewed images. Nineteen subjects were recruited for this study. The results indicated that contrast and range variations are correlated with the conspicuity of some distortions, but not others.
Citations: 27
Skewness balancing algorithm for approximation of discrete objects boundaries
Pub Date : 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970357
Y. Belkhouche, B. Buckles
Object boundaries are an important feature for image processing and computer vision applications. In this paper, a new method is established for extracting the non-convex boundary of an object represented by a 2D point cloud. To determine the object boundary, we start by constructing the convex-hull-based Delaunay triangulation of the point cloud. Given that the points are sampled from the object surface using an instrument such as a camera or laser scanner, the distribution of the edge lengths belonging to the object follows a Gaussian distribution. However, this distribution is skewed by the long edges introduced by the Delaunay triangulation. Removing the skewness makes the convex boundary built by the Delaunay algorithm converge to the real boundary of the object. We tested our method on different datasets, including synthetic data, urban LiDAR (Light Detection and Ranging) data, and binary images. The results show that the proposed method successfully extracts the object boundary.
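A minimal sketch of the edge-pruning idea under stated assumptions: triangulate the point cloud, iteratively drop the triangles owning the longest edges while the edge-length distribution remains positively skewed, then read the boundary off the surviving triangles. The pruning rule and the stopping tolerance are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.stats import skew

def triangle_edges(tri_indices):
    """Return the three vertex-index pairs (sorted) of one triangle."""
    a, b, c = tri_indices
    return [tuple(sorted(e)) for e in ((a, b), (b, c), (a, c))]

def skewness_balanced_boundary(points, tol=0.05):
    """Approximate a non-convex boundary of a 2D point cloud.

    points : (n, 2) array. Returns a list of boundary edges as index pairs.
    """
    points = np.asarray(points, dtype=float)
    triangles = [tuple(t) for t in Delaunay(points).simplices]

    def edge_len(e):
        return np.linalg.norm(points[e[0]] - points[e[1]])

    # Drop the triangles owning the current longest edge while the
    # edge-length distribution is still positively skewed.
    while True:
        edges = {e for t in triangles for e in triangle_edges(t)}
        lengths = np.array([edge_len(e) for e in edges])
        if len(triangles) <= 1 or skew(lengths) <= tol:
            break
        longest = max(edges, key=edge_len)
        triangles = [t for t in triangles if longest not in triangle_edges(t)]

    # Boundary edges belong to exactly one surviving triangle.
    counts = {}
    for t in triangles:
        for e in triangle_edges(t):
            counts[e] = counts.get(e, 0) + 1
    return [e for e, c in counts.items() if c == 1]
```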
Citations: 1
A new model for the visual judgement of kinship
Pub Date : 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970358
Ludovica Lorusso, G. Brelstaff, E. Grosso
We analyse three different visual judgements of human faces: kinship, similarity and dissimilarity. The accepted model of the relation between similarity and kinship is one in which notional kinship signals are detected by the human observer in order to judge both the kinship and the similarity of pairs of faces. We measure observers' response times to face-pair stimuli while they perform judgements of kinship, similarity or dissimilarity. Significant differences in response time are found between the three tasks and between face-pair categories, suggesting that the strategies employed when extracting information from facial features may be conditional on both task and stimulus context. We sketch a new model for face processing related to these three judgements.
Citations: 1
Fingerprint verification using characteristic vector based on planar graphics
Pub Date : 1900-01-01 DOI: 10.1109/ivmspw.2011.5970360
R. M. Rodrigues, C. F. F. Costa Filho, M. G. F. Costa
This paper describes a new characteristic vector model for fingerprint representation that uses planar graph and triangulation algorithms. It is shown that this new characteristic vector model yields better performance in a fingerprint identification system than other vector models already proposed in the literature. Minutiae extraction is an essential step of a fingerprint recognition system. This paper also presents a new method for minutiae extraction that exploits the ridge-ending/ridge-bifurcation duality that exists when the skeleton image of a fingerprint is inverted. It is shown that this new extraction method reduces the computational complexity of a fingerprint identification system.
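As a concrete illustration of minutiae detection on a skeleton image, the sketch below uses the classical crossing-number test; the helper names and loop structure are illustrative, not the authors' implementation. The duality mentioned in the abstract appears in the closing comment: a ridge ending in the ridge skeleton corresponds to a bifurcation in the inverted (valley) skeleton.

```python
import numpy as np

def crossing_number(skel, r, c):
    """Half the sum of absolute differences around the 8-neighbourhood.

    CN == 1 marks a ridge ending, CN == 3 a bifurcation (Rutovitz definition).
    """
    # 8-neighbours in circular order, starting east and going counter-clockwise.
    nb = [skel[r, c + 1], skel[r - 1, c + 1], skel[r - 1, c], skel[r - 1, c - 1],
          skel[r, c - 1], skel[r + 1, c - 1], skel[r + 1, c], skel[r + 1, c + 1]]
    return sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2

def extract_minutiae(skel):
    """Return lists of (row, col) ridge endings and bifurcations.

    skel : binary (0/1) skeleton image of the ridges.
    """
    skel = (np.asarray(skel) > 0).astype(int)
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c]:
                cn = crossing_number(skel, r, c)
                if cn == 1:
                    endings.append((r, c))
                elif cn == 3:
                    bifurcations.append((r, c))
    return endings, bifurcations

# Duality noted in the abstract: running the same test on the inverted
# (valley) skeleton swaps the two minutia types, so one detector can be
# reused for both.
```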
Citations: 1