
35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06): Latest Publications

Rapid Development of a Gunfire Detection Algorithm Using an Imagery Database
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.31
William Seisler, N. Terry, E. Williams
Over the past few years, the Naval Research Laboratory (NRL) has been developing gunfire detection systems using infrared sensors. During the past year, the primary focus of this effort has been on algorithm performance improvements for gunfire detection from infrared imagery. A database of recordings of small arms fire and background clutter is being developed to allow lab testing of new algorithms. As the amount of data continues to grow, the testing analysis becomes lengthier. New tools and methods are being developed to reduce the post analysis time. Results of algorithm improvements for probability of detection and false alarm reduction through use of the database and tools will be presented.
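The abstract reports probability-of-detection and false-alarm improvements measured against the recording database but does not show the scoring procedure. Below is a minimal sketch of that kind of database-driven scoring; the clip format, frame-index matching rule, and tolerance window are illustrative assumptions, not details from the paper.

```python
"""Minimal sketch of scoring a detector against a labeled clip database.

Assumptions (not from the paper): each recording carries ground-truth gunfire
frame indices and the detector reports detected frame indices; a detection
within +/- `tol` frames of an unmatched truth event counts as a hit.
"""

def score_clip(truth_frames, detected_frames, tol=2):
    """Return (hits, misses, false_alarms) for one recording."""
    truth = sorted(truth_frames)
    matched_truth = set()
    false_alarms = 0
    for d in sorted(detected_frames):
        # Find a truth event within the tolerance window that is still unmatched.
        match = next((t for t in truth
                      if abs(d - t) <= tol and t not in matched_truth), None)
        if match is None:
            false_alarms += 1
        else:
            matched_truth.add(match)
    hits = len(matched_truth)
    return hits, len(truth) - hits, false_alarms


def summarize(clips):
    """Aggregate Pd and false alarms per clip over the whole database."""
    total_hits = total_truth = total_fa = 0
    for truth_frames, detected_frames in clips:
        h, m, fa = score_clip(truth_frames, detected_frames)
        total_hits += h
        total_truth += h + m
        total_fa += fa
    pd = total_hits / total_truth if total_truth else float("nan")
    return pd, total_fa / max(len(clips), 1)


if __name__ == "__main__":
    # Two toy recordings: (truth gunfire frames, detector output frames).
    clips = [([10, 55], [11, 90]), ([200], [200, 201])]
    pd, fa_per_clip = summarize(clips)
    print(f"Pd = {pd:.2f}, false alarms per clip = {fa_per_clip:.2f}")
```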
Citations: 1
Model Analysis Geometry Imagery Correlation Tool Kit (MAGIC-TK) for Model Development and Image Analysis
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.26
T. Taczak, M. Rundquist, Colin P. Cahill
The application of IR signature prediction codes in DoD has been predominantly in two areas: 1.) the development of total signature requirements under a broad set of environmental and operational conditions and 2.) the evaluation of signatures of vessels and signature treatments to ensure the specifications are met. As computing power and IR scene generation techniques have advanced, simulation capabilities have evolved to scene injection into real hardware systems. To capture the real world effects required to accurately analyze search and track algorithms, the fidelity of the complete IR scene has required improvement. New validation methodologies are required to evaluate the accuracy of advanced IR scene generation models. This paper will review some of the approaches incorporated into a new model validation tool that will be able to verify model inputs and quantitatively evaluate differences between measured and predicted imagery.
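The abstract describes quantitative evaluation of differences between measured and predicted imagery without naming the metrics. The sketch below computes a few generic difference statistics for two co-registered images of the same scene; the metric choice, and the assumption that the images are already registered and in common radiometric units, are mine rather than MAGIC-TK's.

```python
import numpy as np

def compare_images(measured, predicted):
    """Simple difference statistics between co-registered measured and
    predicted imagery (arrays in the same radiometric units)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    diff = predicted - measured
    return {
        "mean_bias": float(diff.mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
        # Pearson correlation of the two images, flattened to 1-D.
        "correlation": float(np.corrcoef(measured.ravel(), predicted.ravel())[0, 1]),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    measured = rng.normal(300.0, 5.0, size=(128, 128))            # synthetic "measured" frame
    predicted = measured + rng.normal(0.5, 1.0, size=measured.shape)
    print(compare_images(measured, predicted))
```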
Citations: 0
Estimation of Estuary Phytoplankton using a Web-based Tool for Visualization of Hyper-spectral Images
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.22
V. J. Alarcon, J. V. D. Zwaag, R. Moorhead
The development of Web-based tools for visualization and processing of hyper-spectral images has been slow. Memory and processing capabilities of personal computers may have precluded the development of Web-based tools. However, fast access to remote databases, increasing microprocessors' speed, and grid portals that provide interconnection between remote nodes sharing data and computing resources, make possible remote exploration and analysis of hyper-spectral data cubes. This paper presents a Web-based visualization tool for exploring moderate resolution imaging spectroradiometer (MODIS) data cubes. It provides capabilities for individual pixel's reflectance-spectra visualization, on-the-fly per-pixel calculation and visualization of chlorophyll-a and phytoplankton-carbon concentration values. The Web-based interface also generates normalized difference vegetation index images from the multi-spectral information contained in MODIS datasets. The tool is applied to estimate phytoplankton concentrations in the Saint Louis Bay estuary (Mississippi). Chlorophyll-a estimations produced by the Web-based tool compare well with in-situ measurements from a field survey performed during August 2001. Phytoplankton concentrations are calculated using those estimations of chlorophyll-a concentrations generated by the Web-based tool. The higher spatial resolution provided by the interface allowed estimating constituents concentrations at geographical locations near the coast.
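The normalized difference vegetation index mentioned in the abstract follows the standard definition NDVI = (NIR − Red) / (NIR + Red). A per-pixel sketch using MODIS band 1 (red) and band 2 (near-infrared) reflectance arrays is shown below; the input format and the masking of zero-denominator pixels are assumptions rather than the tool's actual implementation.

```python
import numpy as np

def ndvi(red, nir):
    """Per-pixel normalized difference vegetation index from reflectance arrays.

    red : MODIS band 1 surface reflectance (620-670 nm)
    nir : MODIS band 2 surface reflectance (841-876 nm)
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    out = np.full(red.shape, np.nan)          # NaN where the index is undefined
    valid = denom != 0
    out[valid] = (nir[valid] - red[valid]) / denom[valid]
    return out

if __name__ == "__main__":
    red = np.array([[0.05, 0.10], [0.20, 0.0]])
    nir = np.array([[0.40, 0.30], [0.25, 0.0]])
    print(ndvi(red, nir))
```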
Citations: 4
Gabor Wavelet Based Modular PCA Approach for Expression and Illumination Invariant Face Recognition
Pub Date : 2006-10-01 DOI: 10.1109/AIPR.2006.24
Neeharika Gudur, V. Asari
A Gabor wavelet based modular PCA approach for face recognition is proposed in this paper. The proposed technique improves the efficiency of face recognition, under varying illumination and expression conditions for face images when compared to traditional PCA techniques. In this algorithm the face images are divided into smaller sub-images called modules and a series of Gabor wavelets at different scales and orientations are applied on these localized modules for feature extraction. A modified PCA approach is then applied for dimensionality reduction. Due to the extraction of localized features using Gabor wavelets, the proposed algorithm is expected to give improved recognition rate when compared to other traditional techniques. The performance of the proposed technique is evaluated under conditions of varying illumination, expression and variation in pose up to a certain range using standard face databases.
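As a rough illustration of the two ingredients named in the abstract — a bank of Gabor wavelets at several scales and orientations applied to localized modules (sub-images), followed by PCA for dimensionality reduction — the sketch below builds a small filter bank, pools responses per module, and projects the resulting feature vectors onto principal components. The module size, kernel parameters, and pooling by mean magnitude are assumed values, not the authors' settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel (standard textbook form)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_t / lambd + psi)

def modular_gabor_features(image, module=16, scales=(4.0, 8.0), n_orient=4):
    """Filter the image with a small Gabor bank and pool the mean absolute
    response over each module (sub-image) into one feature vector."""
    feats = []
    h, w = image.shape
    for s in scales:
        for k in range(n_orient):
            kern = gabor_kernel(ksize=15, sigma=s / 2.0,
                                theta=k * np.pi / n_orient, lambd=s)
            resp = np.abs(fftconvolve(image, kern, mode="same"))
            for i in range(0, h - module + 1, module):
                for j in range(0, w - module + 1, module):
                    feats.append(resp[i:i + module, j:j + module].mean())
    return np.array(feats)

def pca_project(feature_matrix, n_components=20):
    """Project row-wise feature vectors onto their top principal components."""
    centered = feature_matrix - feature_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    faces = rng.random((10, 64, 64))                 # stand-in for face images
    X = np.stack([modular_gabor_features(f) for f in faces])
    print(pca_project(X, n_components=5).shape)      # (10, 5)
```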
Citations: 28
Real-Time 3D Ladar Imaging
Pub Date : 2006-05-05 DOI: 10.1117/12.664904
P. Cho, H. Anderson, R. Hatch, P. Ramaswami
A prototype image processing system has recently been developed which generates, displays and analyzes three-dimensional ladar data in real time. It is based upon a suite of novel algorithms that transform raw ladar data into cleaned 3D images. These algorithms perform noise reduction, ground plane identification, detector response deconvolution and illumination pattern renormalization. The system also discriminates static from dynamic objects in a scene. In order to achieve real-time throughput, we have parallelized these algorithms on a Linux cluster. We demonstrate that multiprocessor software plus Blade hardware result in a compact, real-time imagery generation adjunct to an operating ladar.
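The abstract lists ground plane identification among the algorithms but does not say how it is implemented. One common approach for ladar point clouds is a RANSAC plane fit, sketched below purely as an illustration; the point-cloud format, inlier tolerance, and iteration count are hypothetical.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, inlier_tol=0.15, rng=None):
    """Fit a dominant plane to an (N, 3) ladar point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane normal . p + d = 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    points = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ground = np.column_stack([rng.uniform(-10, 10, 500),
                              rng.uniform(-10, 10, 500),
                              rng.normal(0.0, 0.05, 500)])      # z ~ 0 plane
    clutter = rng.uniform([-10, -10, 0.5], [10, 10, 5.0], (100, 3))
    normal, d, mask = ransac_ground_plane(np.vstack([ground, clutter]), rng=rng)
    print(normal.round(2), round(d, 2), int(mask.sum()))
```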
Citations: 36
An Image Metric-Based ATR Performance Prediction Testbed
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2006.13
Scott K. Ralph, J. Irvine, M. Snorrason, Steve Vanstone
Automatic target detection (ATD) systems process imagery to detect and locate targets in support of a variety of military missions. Accurate prediction of ATD performance would assist in system design and trade studies, collection management, and mission planning. A need exists for ATD performance prediction based exclusively on information available from the imagery and its associated metadata. We present a predictor based on image measures quantifying the intrinsic ATD difficulty of an image. The modeling effort consists of two phases: a learning phase, where image measures are computed for a set of test images, the ATD performance is measured, and a prediction model is developed; and a second phase to test and validate performance prediction. The learning phase produces a mapping, valid across various ATR algorithms, which is even applicable when no image truth is available (e.g., when evaluating denied area imagery). The testbed has plug-in capability to allow rapid evaluation of new ATR algorithms. The image measures employed in the model include statistics derived from a constant false alarm rate (CFAR) processor, the power spectrum signature, and others. We present a performance predictor using a trained classifier ATD that was constructed using GENIE, a tool developed at Los Alamos National Laboratory. The paper concludes with a discussion of future research.
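Among the image measures, the abstract cites statistics derived from a constant false alarm rate (CFAR) processor. The sketch below is a generic cell-averaging CFAR statistic over an image, from which simple summary measures could then be taken; the window sizes, threshold, and summary statistics are assumed parameters, not those used in the testbed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_statistic(image, guard=2, train=8):
    """Cell-averaging CFAR statistic: each pixel divided by the mean of a
    surrounding training annulus (guard cells excluded)."""
    image = np.asarray(image, dtype=float)
    outer = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    # Sums over the outer window and the inner (guard) window via box filters.
    outer_sum = uniform_filter(image, size=outer, mode="reflect") * outer ** 2
    inner_sum = uniform_filter(image, size=inner, mode="reflect") * inner ** 2
    annulus_mean = (outer_sum - inner_sum) / (outer ** 2 - inner ** 2)
    return image / np.maximum(annulus_mean, 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    scene = rng.exponential(1.0, size=(100, 100))   # clutter-like background
    scene[50, 50] += 25.0                           # bright point target
    stat = ca_cfar_statistic(scene)
    # Example image-level measures one might feed a difficulty predictor:
    print(round(float(stat.max()), 1), int((stat > 10.0).sum()))
```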
Citations: 21