
2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW): Latest Publications

MetaMax: Improved Open-Set Deep Neural Networks via Weibull Calibration
Pub Date : 2022-11-20 DOI: 10.1109/WACVW58289.2023.00048
Zongyao Lyu, Nolan B. Gutierrez, William J. Beksi
Open-set recognition refers to the problem in which classes that were not seen during training appear at inference time. This requires the ability to identify instances of novel classes while maintaining discriminative capability for closed-set classification. OpenMax was the first deep neural network-based approach to address open-set recognition by calibrating the predictive scores of a standard closed-set classification network. In this paper we present MetaMax, a more effective post-processing technique that improves upon contemporary methods by directly modeling class activation vectors. MetaMax removes the need for computing class mean activation vectors (MAVs) and distances between a query image and a class MAV as required in OpenMax. Experimental results show that MetaMax outperforms OpenMax and is comparable in performance to other state-of-the-art approaches.
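As a rough illustration of the Weibull-calibration idea behind OpenMax-style open-set methods, the sketch below fits a Weibull distribution to each class's highest training activations and uses it to rescale test-time logits, routing atypical mass to an extra "unknown" score. This is a minimal sketch, not the authors' MetaMax implementation; the function names, the tail size, and the assumption of positive, class-indexed activations are illustrative.

```python
# Minimal sketch of Weibull calibration for open-set scores (illustrative, not MetaMax itself).
# Assumes activations_per_class maps class index -> 1-D array of positive activations
# collected on correctly classified training samples.
import numpy as np
from scipy.stats import weibull_min

def fit_class_weibulls(activations_per_class, tail_size=20):
    """Fit a Weibull model to the largest activations observed for each class."""
    models = {}
    for cls, acts in activations_per_class.items():
        tail = np.sort(acts)[-tail_size:]                # highest (most typical) activations
        shape, loc, scale = weibull_min.fit(tail, floc=0.0)
        models[cls] = (shape, loc, scale)
    return models

def open_set_scores(logits, models):
    """Rescale logits by how typical they look under each class's Weibull model; the
    leftover mass becomes an extra 'unknown' score. Returns a softmax over classes + unknown."""
    calibrated = np.zeros(len(logits) + 1)
    for cls, score in enumerate(logits):
        shape, loc, scale = models[cls]
        w = weibull_min.cdf(score, shape, loc=loc, scale=scale)  # typicality weight in [0, 1]
        calibrated[cls] = score * w
        calibrated[-1] += score * (1.0 - w)                      # atypical mass -> unknown
    exp = np.exp(calibrated - calibrated.max())
    return exp / exp.sum()
```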
Citations: 0
Mixture Domain Adaptation to Improve Semantic Segmentation in Real-World Surveillance
Pub Date : 2022-11-18 DOI: 10.1109/WACVW58289.2023.00007
Sébastien Piérard, A. Cioppa, Anaïs Halin, Renaud Vandeghen, Maxime Zanella, B. Macq, S. Mahmoudi, Marc Van Droogenbroeck
Various tasks encountered in real-world surveillance can be addressed by determining posteriors (e.g. by Bayesian inference or machine learning), based on which critical decisions must be taken. However, the surveillance domain (acquisition device, operating conditions, etc.) is often unknown, which prevents any possibility of scene-specific optimization. In this paper, we define a probabilistic framework and present a formal proof of an algorithm for the unsupervised many-to-infinity domain adaptation of posteriors. Our proposed algorithm is applicable when the probability measure associated with the target domain is a convex combination of the probability measures of the source domains. It makes use of source models and a domain discriminator model trained off-line to compute posteriors adapted on the fly to the target domain. Finally, we show the effectiveness of our algorithm for the task of semantic segmentation in real-world surveillance. The code is publicly available at https://github.com/rvandeghen/MDA.
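The convex-combination step described above can be written down compactly: the adapted posterior is the source posteriors mixed with the weights produced by the domain discriminator for each sample, i.e. p(class | x) = sum over domains d of p(d | x) p(class | x, d). The snippet below is a minimal sketch of that computation, assuming callable source models and a discriminator that returns normalized mixture weights; it is not the authors' released code (see the repository linked above).

```python
# Minimal sketch of mixing per-domain posteriors with discriminator weights
# (illustrative only; function and variable names are assumptions).
import numpy as np

def adapted_posterior(x, source_models, domain_discriminator):
    """p(class | x) = sum_d p(domain d | x) * p(class | x, domain d)."""
    weights = np.asarray(domain_discriminator(x))            # (num_domains,), sums to 1
    posteriors = np.stack([m(x) for m in source_models])     # (num_domains, num_classes)
    return weights @ posteriors                               # convex combination of posteriors
```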
Citations: 4
Detecting Arbitrary Keypoints on Limbs and Skis with Sparse Partly Correct Segmentation Masks
Pub Date : 2022-11-17 DOI: 10.1109/WACVW58289.2023.00051
K. Ludwig, Daniel Kienzle, Julian Lorenz, R. Lienhart
Analyses based on the body posture are crucial for top-class athletes in many sports disciplines. If at all, coaches label only the most important keypoints, since manual annotations are very costly. This paper proposes a method to detect arbitrary keypoints on the limbs and skis of professional ski jumpers that requires a few, only partly correct segmentation masks during training. Our model is based on the Vision Transformer architecture with a special design for the input tokens to query for the desired keypoints. Since we use segmentation masks only to generate ground truth labels for the freely selectable keypoints, partly correct segmentation masks are sufficient for our training procedure. Hence, there is no need for costly hand-annotated segmentation masks. We analyze different training techniques for freely selected and standard keypoints, including pseudo labels, and show in our experiments that only a few partly correct segmentation masks are sufficient for learning to detect arbitrary keypoints on limbs and skis.
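To make the idea of "freely selectable" keypoints derived from segmentation masks concrete, the sketch below parameterizes a keypoint by its fractional position along the principal axis of a binary limb/ski mask. This is an assumed, simplified construction for illustration only, not the paper's exact labeling procedure.

```python
# Illustrative sketch: a "freely selectable" keypoint defined as the point a fraction t
# along the principal axis of a binary segmentation mask (not the paper's exact procedure).
import numpy as np

def keypoint_from_mask(mask, t):
    """Return (x, y) of the mask pixel lying a fraction t in [0, 1] along the mask's main axis."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)  # PCA of the segmented region
    axis = vt[0]                                                  # principal direction
    proj = (pts - center) @ axis                                  # 1-D position along the limb/ski
    target = proj.min() + t * (proj.max() - proj.min())
    return pts[np.argmin(np.abs(proj - target))]                  # nearest mask pixel to that position
```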
Citations: 3
Expanding Accurate Person Recognition to New Altitudes and Ranges: The BRIAR Dataset
Pub Date : 2022-11-03 DOI: 10.1109/WACVW58289.2023.00066
David Cornett, Joel Brogan, Nell Barber, D. Aykac, Seth T. Baird, Nick Burchfield, Carl Dukes, Andrew M. Duncan, R. Ferrell, Jim Goddard, Gavin Jager, Matt Larson, Bart Murphy, Christi Johnson, Ian Shelley, Nisha Srinivas, Brandon Stockwell, Leanne Thompson, Matt Yohe, Robert Zhang, S. Dolvin, H. Santos-Villalobos, D. Bolme
Face recognition technology has advanced significantly in recent years due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolution, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.
Citations: 14
SeaDroneSim: Simulation of Aerial Images for Detection of Objects Above Water
Pub Date : 2022-10-26 DOI: 10.1109/WACVW58289.2023.00027
Xiao-sheng Lin, Cheng Liu, Miao Yu, Y. Aloimonos
Unmanned Aerial Vehicles (UAVs) are known for their speed and versatility in collecting aerial images and remote sensing for land use surveys and precision agriculture. With UAVs' growth in availability and accessibility, they are now of vital importance as technological support in marine-based applications such as vessel monitoring and search-and-rescue (SAR) operations. High-resolution cameras and graphics processing units (GPUs) can be equipped on UAVs to effectively and efficiently aid in locating objects of interest, lending themselves to emergency rescue operations or, in our case, precision aquaculture applications. Modern computer vision algorithms allow us to detect objects of interest in a dynamic environment; however, these algorithms are dependent on large training datasets collected from UAVs, which are currently time-consuming and labor-intensive to collect for maritime environments. To this end, we present a new benchmark suite, SeaDroneSim, that can be used to create photo-realistic aerial image datasets with ground truth for segmentation masks of any given object. Utilizing only the synthetic data generated from SeaDroneSim, we obtained a mean Average Precision (mAP) of 71 on real aerial images for detecting our object of interest, a popular, open-source, remotely operated underwater vehicle (BlueROV), in this feasibility study. The results of this new simulation suite serve as a baseline for the detection of the BlueROV, which can be used in underwater surveys of oyster reefs and other marine applications.
Citations: 5
Discriminative Sampling of Proposals in Self-Supervised Transformers for Weakly Supervised Object Localization
Pub Date : 2022-09-09 DOI: 10.1109/WACVW58289.2023.00021
Shakeeb Murtaza, Soufiane Belharbi, M. Pedersoli, Aydin Sarraf, Eric Granger
Drones are employed in a growing number of visual recognition applications. A recent development in cell tower inspection is drone-based asset surveillance, where the autonomous flight of a drone is guided by localizing objects of interest in successive aerial images. In this paper, we propose a method to train deep weakly-supervised object localization (WSOL) models based only on image-class labels to locate objects with high confidence. To train our localizer, pseudo labels are efficiently harvested from self-supervised vision transformers (SSTs). However, since SSTs decompose the scene into multiple maps containing various object parts, and do not rely on any explicit supervisory signal, they cannot distinguish between the object of interest and other objects, as required for WSOL. To address this issue, we propose leveraging the multiple maps generated by the different transformer heads to acquire pseudo-labels for training a deep WSOL model. In particular, a new Discriminative Proposals Sampling (DiPS) method is introduced that relies on a CNN classifier to identify discriminative regions. Then, foreground and background pixels are sampled from these regions in order to train a WSOL model for generating activation maps that can accurately localize objects belonging to a specific class. Empirical results on the challenging TelDrone dataset indicate that our proposed approach can outperform state-of-the-art methods over a wide range of threshold values on the produced maps. We also computed results on the CUB dataset, showing that our method can be adapted to other tasks. Our code is available at https://github.com/shakeebmurtaza/dips.
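The foreground/background sampling step can be illustrated with a small sketch: given one activation map, pixels above a high percentile are treated as foreground seeds and pixels below a low percentile as background seeds. The thresholds and function names below are assumptions for illustration, not the released DiPS code.

```python
# Illustrative sketch of harvesting foreground/background seed pixels from one activation map
# (thresholds and names are assumptions, not the released DiPS implementation).
import numpy as np

def sample_pseudo_pixels(activation_map, n_fg=10, n_bg=10, fg_pct=95, bg_pct=20, seed=0):
    """Return (row, col) seeds sampled from high-activation (foreground) and
    low-activation (background) regions of a single 2-D map."""
    rng = np.random.default_rng(seed)

    def pick(mask, n):
        ys, xs = np.nonzero(mask)
        idx = rng.choice(len(ys), size=min(n, len(ys)), replace=False)
        return np.stack([ys[idx], xs[idx]], axis=1)

    fg = activation_map >= np.percentile(activation_map, fg_pct)
    bg = activation_map <= np.percentile(activation_map, bg_pct)
    return pick(fg, n_fg), pick(bg, n_bg)
```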
Citations: 6
Synthehicle: Multi-Vehicle Multi-Camera Tracking in Virtual Cities
Pub Date : 2022-08-30 DOI: 10.1109/WACVW58289.2023.00005
Fabian Herzog, Jun-Liang Chen, Torben Teepe, Johannes Gilg, S. Hörmann, G. Rigoll
Smart City applications such as intelligent traffic routing, accident prevention or vehicle surveillance rely on computer vision methods for exact vehicle localization and tracking. Privacy issues make collecting real data difficult, and labeling data is a time-consuming and costly process. Due to the scarcity of accurately labeled data, detecting and tracking vehicles in 3D from multiple cameras proves challenging to explore. We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views. Unlike existing datasets, which only provide tracking ground truth for 2D bounding boxes, our dataset additionally contains perfect labels for 3D bounding boxes in camera and world coordinates, depth estimation, and instance, semantic and panoptic segmentation. The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes, making it the most extensive dataset for multi-target multi-camera tracking so far. We provide baselines for detection, vehicle re-identification, and single- and multi-camera tracking. Code and data are publicly available at https://github.com/fubel/synthehicle.
Citations: 6
Human Saliency-Driven Patch-based Matching for Interpretable Post-mortem Iris Recognition
Pub Date : 2022-08-03 DOI: 10.1109/WACVW58289.2023.00077
Aidan Boyd, Daniel Moreira, Andrey Kuehlkamp, K. Bowyer, A. Czajka
Forensic iris recognition, as opposed to live iris recognition, is an emerging research area that leverages the discriminative power of iris biometrics to aid human examiners in their efforts to identify deceased persons. As a machine learning-based technique in a predominantly human-controlled task, forensic recognition serves as “back-up” to human expertise in the task of post-mortem identification. As such, the machine learning model must be (a) interpretable, and (b) post-mortem-specific, to account for changes in decaying eye tissue. In this work, we propose a method that satisfies both requirements, and that approaches the creation of a post-mortem-specific feature extractor in a novel way employing human perception. We first train a deep learning-based feature detector on post-mortem iris images, using annotations of image regions highlighted by humans as salient for their decision making. In effect, the method learns interpretable features directly from humans, rather than purely data-driven features. Second, regional iris codes (again, with human-driven filtering kernels) are used to pair detected iris patches, which are translated into pairwise, patch-based comparison scores. In this way, our method presents human examiners with human-understandable visual cues in order to justify the identification decision and corresponding confidence score. When tested on a dataset of post-mortem iris images collected from 259 deceased subjects, the proposed method places among the three best iris comparison tools, demonstrating better results than the commercial (non-human-interpretable) VeriEye approach. We propose a unique post-mortem iris recognition method trained with human saliency to give fully-interpretable comparison outcomes for use in the context of forensic examination, achieving state-of-the-art recognition performance.
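As a hedged illustration of patch-based comparison scoring, the sketch below compares two sets of binary patch codes with fractional Hamming distances and averages the best-match distance per patch. It is a generic stand-in for the idea of pairwise, patch-based scores, not the paper's trained, saliency-driven pipeline; all names are illustrative.

```python
# Illustrative sketch of a patch-based comparison score from binary iris-region codes
# (a generic stand-in; not the paper's human-saliency-trained pipeline).
import numpy as np

def patch_hamming(code_a, code_b):
    """Fractional Hamming distance between two equal-shape binary patch codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def comparison_score(patches_a, patches_b):
    """Average best-match patch distance between two irises; lower means more similar."""
    best = [min(patch_hamming(pa, pb) for pb in patches_b) for pa in patches_a]
    return float(np.mean(best))
```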
Citations: 4
Video Manipulations Beyond Faces: A Dataset with Human-Machine Analysis
Pub Date : 2022-07-26 DOI: 10.1109/WACVW58289.2023.00071
Trisha Mittal, Ritwik Sinha, Viswanathan Swaminathan, J. Collomosse, Dinesh Manocha
As tools for content editing mature, and artificial intelligence (AI) based algorithms for synthesizing media grow, the presence of manipulated content across online media is increasing. This phenomenon causes the spread of misinformation, creating a greater need to distinguish between "real" and "manipulated" content. To this end, we present Videosham, a dataset consisting of 826 videos (413 real and 413 manipulated). Many of the existing deepfake datasets focus exclusively on two types of facial manipulations: swapping with a different subject's face or altering the existing face. Videosham, on the other hand, contains more diverse, context-rich, and human-centric, high-resolution videos manipulated using a combination of 6 different spatial and temporal attacks. Our analysis shows that state-of-the-art manipulation detection algorithms only work for a few specific attacks and do not scale well on Videosham. We performed a user study on Amazon Mechanical Turk with 1200 participants to understand if they can differentiate between the real and manipulated videos in Videosham. Finally, we dig deeper into the strengths and weaknesses of performances by humans and SOTA algorithms to identify gaps that need to be filled with better AI algorithms. We present the dataset here (VideoSham dataset link).
Citations: 4
Masked Autoencoder for Self-Supervised Pre-training on Lidar Point Clouds
Pub Date : 2022-07-01 DOI: 10.1109/WACVW58289.2023.00039
Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer Petersson, Lennart Svensson
Masked autoencoding has become a successful pretraining paradigm for Transformer models for text, images, and, recently, point clouds. Raw automotive datasets are suitable candidates for self-supervised pre-training as they generally are cheap to collect compared to annotations for tasks like 3D object detection (OD). However, the development of masked autoencoders for point clouds has focused solely on synthetic and indoor data. Consequently, existing methods have tailored their representations and models toward small and dense point clouds with homogeneous point densities. In this work, we study masked autoencoding for point clouds in an automotive setting, which are sparse and for which the point density can vary drastically among objects in the same scene. To this end, we propose Voxel-MAE, a simple masked autoencoding pre-training scheme designed for voxel representations. We pre-train the backbone of a Transformer-based 3D object detector to reconstruct masked voxels and to distinguish between empty and non-empty voxels. Our method improves the 3D OD performance by 1.75 mAP points and 1.05 NDS on the challenging nuScenes dataset. Further, we show that by pre-training with Voxel-MAE, we require only 40% of the annotated data to outperform a randomly initialized equivalent. Code is available at https://github.com/georghess/voxel-mae.
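The core masking step of such a scheme is simple to sketch: non-empty voxels are randomly split into a visible set fed to the encoder and a masked set the decoder must reconstruct. The mask ratio and names below are assumptions for illustration, not the authors' configuration.

```python
# Illustrative sketch of the voxel-masking step in masked-autoencoder pre-training
# (mask ratio and names are assumptions, not the authors' configuration).
import numpy as np

def mask_voxels(voxel_coords, mask_ratio=0.7, rng=None):
    """Split the indices of non-empty voxels into a visible set (seen by the encoder)
    and a masked set (to be reconstructed by the decoder)."""
    if rng is None:
        rng = np.random.default_rng()
    voxel_coords = np.asarray(voxel_coords)
    n_masked = int(mask_ratio * len(voxel_coords))
    perm = rng.permutation(len(voxel_coords))
    masked, visible = perm[:n_masked], perm[n_masked:]
    return voxel_coords[visible], voxel_coords[masked]
```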
Citations: 17