
Latest articles from IEEE Transactions on Pattern Analysis and Machine Intelligence

EventHDR: From Event to High-Speed HDR Videos and Beyond.
Pub Date : 2024-10-09 DOI: 10.1109/TPAMI.2024.3469571
Yunhao Zou, Ying Fu, Tsuyoshi Takatani, Yinqiang Zheng

Event cameras are innovative neuromorphic sensors that asynchronously capture scene dynamics. Due to the event-triggering mechanism, such cameras record event streams with much shorter response latency and higher intensity sensitivity than conventional cameras. On the basis of these features, previous works have attempted to reconstruct high dynamic range (HDR) videos from events, but have either suffered from unrealistic artifacts or failed to provide sufficiently high frame rates. In this paper, we present a recurrent convolutional neural network that reconstructs high-speed HDR videos from event sequences, with key-frame guidance to prevent the potential error accumulation caused by sparse event data. Additionally, to address the problem of severely limited real datasets, we develop a new optical system to collect a real-world dataset with paired high-speed HDR videos and event streams, facilitating future research in this field. Our dataset provides the first real paired dataset for event-to-HDR reconstruction, avoiding the potential inaccuracies of simulation strategies. Experimental results demonstrate that our method can generate high-quality, high-speed HDR videos. We further explore the potential of our work in cross-camera reconstruction and downstream computer vision tasks, including object detection, panoramic segmentation, optical flow estimation, and monocular depth estimation under HDR scenarios.
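The key-frame guidance idea above can be sketched in a few lines: a running state is updated from each chunk of event evidence, and is periodically re-anchored to a key frame so drift cannot accumulate. Everything here (the function name, the convex-blending scheme, `key_interval`) is an illustrative toy, not the paper's recurrent CNN:

```python
import numpy as np

def reconstruct_hdr(event_chunks, key_frames, key_interval=4, alpha=0.8):
    """Toy recurrent reconstruction: each step fuses the running state with
    new event evidence (a stand-in for the recurrent network); every
    `key_interval` steps the state is re-anchored to a key frame so errors
    from the sparse event data cannot accumulate."""
    state = key_frames[0].astype(float)
    frames = []
    for t, events in enumerate(event_chunks):
        state = alpha * state + (1.0 - alpha) * events   # recurrent update
        if t % key_interval == 0:                         # key-frame guidance
            state = 0.5 * state + 0.5 * key_frames[t // key_interval]
        frames.append(state.copy())
    return frames
```

Without the re-anchoring branch, any bias in the per-step update compounds over the sequence; with it, the error is reset at every key frame.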

Pixel is All You Need: Adversarial Spatio-Temporal Ensemble Active Learning for Salient Object Detection.
Pub Date : 2024-10-09 DOI: 10.1109/TPAMI.2024.3476683
Zhenyu Wu, Wei Wang, Lin Wang, Yacong Li, Fengmao Lv, Qing Xia, Chenglizhao Chen, Aimin Hao, Shuo Li

Although weakly-supervised techniques can reduce the labeling effort, it is unclear whether a saliency model trained with weakly-supervised data (e.g., point annotations) can match the performance of its fully-supervised counterpart. This paper attempts to answer this unexplored question by proving a hypothesis: there exists a point-labeled dataset on which trained saliency models achieve performance equivalent to that of models trained on the densely annotated dataset. To prove this conjecture, we propose a novel yet effective adversarial spatio-temporal ensemble active learning method. Our contributions are fourfold: 1) Our proposed adversarial attack triggering uncertainty can overcome the overconfidence of existing active learning methods and accurately locate these uncertain pixels. 2) Our proposed spatio-temporal ensemble strategy not only achieves outstanding performance but also significantly reduces the model's computational cost. 3) Our proposed relationship-aware diversity sampling can overcome oversampling while boosting model performance. 4) We provide theoretical proof for the existence of such a point-labeled dataset. Experimental results show that our approach can find such a point-labeled dataset, where a saliency model trained on it obtains 98%-99% of the performance of its fully-supervised version with only ten annotated points per image. The code is available at https://github.com/wuzhenyubuaa/ASTE-AL.
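A crude way to see what "attack-triggered uncertainty" buys over raw confidence: pixels whose predicted label flips under small input perturbations are uncertain even when the model reports high confidence. The sketch below is a random-probe proxy, not the paper's gradient-based attack, and every name in it is hypothetical:

```python
import numpy as np

def adversarial_uncertainty(predict, x, eps=1e-2, n_probes=8, seed=0):
    """Toy proxy for attack-triggered uncertainty: for each pixel, measure
    how often its thresholded prediction flips under small random input
    perturbations. `predict` maps an input array to per-pixel saliency
    scores in [0, 1]."""
    rng = np.random.default_rng(seed)
    base = predict(x) >= 0.5                   # binarized reference prediction
    flips = np.zeros(base.shape, dtype=float)
    for _ in range(n_probes):
        x_adv = x + eps * rng.standard_normal(x.shape)
        flips += (predict(x_adv) >= 0.5) != base
    return flips / n_probes                    # per-pixel flip rate in [0, 1]
```

Pixels far from the decision boundary never flip; pixels sitting on it flip roughly half the time, which is exactly where extra annotation is worth spending.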

Latent Diffusion Enhanced Rectangle Transformer for Hyperspectral Image Restoration.
Pub Date : 2024-10-09 DOI: 10.1109/TPAMI.2024.3475249
Miaoyu Li, Ying Fu, Tao Zhang, Ji Liu, Dejing Dou, Chenggang Yan, Yulun Zhang

The restoration of hyperspectral images (HSIs) plays a pivotal role in subsequent hyperspectral image applications. Despite the remarkable capabilities of deep learning, current HSI restoration methods face challenges in effectively exploiting the spatial non-local self-similarity and spectral low-rank property inherent in HSIs. This paper addresses these challenges by introducing a latent diffusion enhanced rectangle Transformer for HSI restoration, tackling the non-local spatial similarity and the HSI-specific low-rank property. To effectively capture non-local spatial similarity, we propose a multi-shape spatial rectangle self-attention module in both horizontal and vertical directions, enabling the model to utilize informative spatial regions for HSI restoration. Meanwhile, we propose a spectral latent diffusion enhancement module that generates an image-specific latent dictionary based on the content of the HSI for low-rank vector extraction and representation. This module utilizes a diffusion model to generatively obtain representations of global low-rank vectors, thereby aligning more closely with the desired HSI. A series of comprehensive experiments were carried out on four common hyperspectral image restoration tasks: HSI denoising, HSI super-resolution, HSI reconstruction, and HSI inpainting. The results highlight the effectiveness of our proposed method, as demonstrated by improvements in both objective metrics and subjective visual quality.
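The "rectangle" attention in horizontal and vertical directions amounts to restricting each position's attention to its own row and its own column instead of all H*W positions. A minimal single-head sketch of that axis-wise pattern (illustrative only; the paper's module additionally uses multiple rectangle shapes and learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rectangle_self_attention(feat):
    """Axis-wise self-attention sketch: each position attends to the other
    positions in its row (horizontal branch) and column (vertical branch).
    feat: array of shape (H, W, d)."""
    h_scores = np.einsum('hwd,hvd->hwv', feat, feat)          # row-wise scores
    h_out = np.einsum('hwv,hvd->hwd', softmax(h_scores), feat)
    v_scores = np.einsum('hwd,uwd->hwu', feat, feat)          # column-wise scores
    v_out = np.einsum('hwu,uwd->hwd', softmax(v_scores), feat)
    return 0.5 * (h_out + v_out)
```

Each branch costs O(H*W*(H+W)) score entries rather than the O((H*W)^2) of full spatial attention, which is what makes attending over large hyperspectral images tractable.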

NCMNet: Neighbor Consistency Mining Network for Two-View Correspondence Pruning
Pub Date : 2024-10-04 DOI: 10.1109/TPAMI.2024.3462453
Xin Liu;Rong Qin;Junchi Yan;Jufeng Yang
Correspondence pruning plays a crucial role in a variety of feature-matching-based tasks; it aims at identifying correct correspondences (inliers) among initial ones. Seeking consistent $k$-nearest neighbors in both coordinate and feature spaces is a prevalent strategy employed in previous approaches. However, the vicinity of an inlier contains numerous irregular false correspondences (outliers), which mistakenly become neighbors under the similarity constraint of nearest neighbors. To tackle this issue, we propose a global-graph space in which to seek consistent neighbors with similar graph structures. This is achieved by using a global connected graph to explicitly render the affinity relationship between correspondences based on spatial and feature consistency. Furthermore, to enhance the robustness of the method across various matching scenes, we develop a neighbor consistency block to adequately leverage the potential of three types of neighbors. The consistency can be progressively mined by sequentially extracting intra-neighbor context and exploring inter-neighbor interactions. Ultimately, we present a Neighbor Consistency Mining Network (NCMNet) to estimate the parametric models and remove outliers. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art methods on various benchmarks for two-view geometry estimation. Meanwhile, four extended tasks, including remote sensing image registration, point cloud registration, 3D reconstruction, and visual localization, are conducted to test the generalization ability.
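The baseline strategy the paper improves upon, seeking $k$-nearest neighbors consistent across coordinate and feature spaces, is simple to state in code. A brute-force sketch (hypothetical helper, not the paper's network):

```python
import numpy as np

def consistent_neighbors(coords, feats, k=3):
    """For each correspondence, return the indices that are among its k
    nearest neighbors in BOTH coordinate space and feature space (the
    classic consistency heuristic for correspondence pruning)."""
    def knn(x):
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)           # exclude self-matches
        return np.argsort(d, axis=1)[:, :k]
    nn_coord, nn_feat = knn(coords), knn(feats)
    return [set(a) & set(b) for a, b in zip(nn_coord, nn_feat)]
```

The failure mode motivating NCMNet is visible here: an outlier that happens to lie near an inlier in both spaces passes this intersection test, which is why the paper adds a third, global-graph space.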
Competing for Pixels: A Self-play Algorithm for Weakly-supervised Semantic Segmentation.
Pub Date : 2024-10-03 DOI: 10.1109/TPAMI.2024.3474094
Shaheer U Saeed, Shiqi Huang, Joao Ramalhinho, Iani J M B Gayo, Nina Montana-Brown, Ester Bonmati, Stephen P Pereira, Brian Davidson, Dean C Barratt, Matthew J Clarkson, Yipeng Hu

Weakly-supervised semantic segmentation (WSSS) methods, reliant on image-level labels indicating object presence, lack explicit correspondence between labels and regions of interest (ROIs), posing a significant challenge. Despite this, WSSS methods have attracted attention due to their much lower annotation costs compared to fully-supervised segmentation. Leveraging reinforcement learning (RL) self-play, we propose a novel WSSS method that gamifies image segmentation of an ROI. We formulate segmentation as a competition between two agents that vie to select ROI-containing patches until all such patches are exhausted. The score at each time-step, used to compute the reward for agent training, represents the likelihood of object presence within the selection, determined by an object-presence detector pre-trained using only image-level binary classification labels of object presence. Additionally, we propose a game termination condition that can be called by either side upon exhaustion of all ROI-containing patches, followed by the selection of a final patch from each. Upon termination, the agent is incentivised if ROI-containing patches are exhausted or disincentivised if an ROI-containing patch is found by the competitor. This competitive setup ensures minimisation of over- or under-segmentation, a common problem with WSSS methods. Extensive experimentation across four datasets demonstrates significant performance improvements over recent state-of-the-art methods. Code: https://github.com/s-sd/spurl/tree/main/wss.
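The game dynamics above can be made concrete with a toy round in which a greedy heuristic stands in for the learned RL policies. All names and the threshold are hypothetical, and the real method trains the agents from the detector-derived reward rather than picking greedily:

```python
def self_play_round(patches, detector, threshold=0.5):
    """Toy version of the two-agent game: agents alternate picking patches
    the detector scores as object-containing; the round ends when no patch
    above `threshold` remains (the termination condition in the text).
    `detector` maps a patch to an object-presence score in [0, 1]."""
    remaining = list(patches)
    picks = {0: [], 1: []}
    player = 0
    while True:
        scored = [p for p in remaining if detector(p) >= threshold]
        if not scored:                       # ROI-containing patches exhausted
            break
        choice = max(scored, key=detector)   # greedy stand-in for the policy
        picks[player].append(choice)
        remaining.remove(choice)
        player = 1 - player
    return picks
```

The union of both players' picks is the predicted ROI mask; because each agent is penalised when the opponent finds a patch it missed, neither can afford to stop early (under-segmentation) or grab background (over-segmentation).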

Optical Flow as Spatial-Temporal Attention Learners
Pub Date : 2024-10-03 DOI: 10.1109/TPAMI.2024.3463648
Yawen Lu;Cheng Han;Qifan Wang;Heng Fan;Zhaodan Kong;Dongfang Liu;Yingjie Chen
Optical flow is an indispensable building block for various important computer vision tasks, including motion estimation, object tracking, and disparity measurement. To date, the dominant methods are CNN-based, leaving plenty of room for improvement. In this work, we propose TransFlow, a transformer architecture for optical flow estimation. Compared to dominant CNN-based methods, TransFlow demonstrates three advantages. First, it provides more accurate correlation and trustworthy matching in flow estimation by utilizing spatial self-attention and cross-attention mechanisms between adjacent frames to effectively capture global dependencies; Second, it recovers more compromised information (e.g., occlusion and motion blur) in flow estimation through long-range temporal association in dynamic scenes; Third, it introduces a concise self-learning paradigm, eliminating the need for complex and laborious multi-stage pre-training procedures. The versatility and superiority of TransFlow extend seamlessly to 3D scene motion, yielding competitive outcomes in 3D scene flow estimation. Our approach attains state-of-the-art results on benchmark datasets such as Sintel and KITTI-15, while also exhibiting exceptional performance on downstream tasks, including video object detection using the ImageNet VID dataset, video frame interpolation using the GoPro dataset, and video stabilization using the DeepStab dataset. We believe that the effectiveness of TransFlow positions it as a flexible baseline for both optical flow and scene flow estimation, offering promising avenues for future research and development.
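The cross-attention between adjacent frames that TransFlow relies on is ordinary scaled dot-product attention with queries from frame t and keys/values from frame t+1. A single-head numpy sketch of the mechanism (not the TransFlow model itself):

```python
import numpy as np

def cross_attention(query_feats, key_feats, value_feats):
    """Scaled dot-product cross-attention: (N, d) tokens from frame t
    (queries) attend to tokens from frame t+1 (keys/values), yielding a
    soft, global correlation between the two frames."""
    d = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ value_feats
```

Because every query token sees every key token, the matching is global, which is the property the abstract credits for recovering occluded or motion-blurred regions that local CNN correlation volumes miss.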
Sparse Non-Local CRF With Applications.
Pub Date : 2024-10-03 DOI: 10.1109/TPAMI.2024.3474468
Olga Veksler, Yuri Boykov

CRFs model spatial coherence in classical and deep learning computer vision. The most common CRF is called pairwise, as it connects pixel pairs. There are two types of pairwise CRF: sparse and dense. A sparse CRF connects nearby pixels, leading to a number of connections linear in the image size. A dense CRF connects all pixel pairs, leading to a quadratic number of connections. While dense CRF is a more general model, it is much less efficient than sparse CRF. In fact, only Gaussian edge dense CRF is used in practice, and even then with approximations. We propose a new pairwise CRF, which we call sparse non-local CRF. Like dense CRF, it has non-local connections, and, therefore, it is more general than sparse CRF. Like sparse CRF, the number of connections is linear, and, therefore, our model is efficient. Besides efficiency, another advantage is that our edge weights are unrestricted. We show that our sparse non-local CRF models properties similar to those of Gaussian dense CRF. We also discuss connections to other CRF models. We demonstrate the usefulness of our model on classical and deep learning applications, for two and multiple labels.
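The linear-versus-quadratic gap is easy to make concrete. A small helper (illustrative, not from the paper) counting pairwise connections on an h x w grid, taking "sparse" to mean the standard 4-neighbourhood:

```python
def pairwise_connections(h, w, sparse=True):
    """Count pairwise CRF connections on an h x w pixel grid: a sparse
    4-connected CRF has one edge per adjacent pixel pair (linear in h*w),
    while a dense CRF has one edge per pixel pair (quadratic)."""
    n = h * w
    if sparse:
        # horizontal edges + vertical edges of the 4-neighbourhood
        return h * (w - 1) + (h - 1) * w
    return n * (n - 1) // 2
```

For a modest 1000 x 1000 image the sparse grid has about 2 million edges while the dense model has roughly 5 * 10^11, which is why dense CRFs are only practical with Gaussian-filtering approximations.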

Sparse Non-Local CRF With Applications.
Pub Date : 2024-10-03 DOI: 10.1109/TPAMI.2024.3474468
Olga Veksler, Yuri Boykov

CRFs model spatial coherence in classical and deep learning computer vision. The most common CRF is called pairwise, as it connects pixel pairs. There are two types of pairwise CRF: sparse and dense. A sparse CRF connects the nearby pixels, leading to a linear number of connections in the image size. A dense CRF connects all pixel pairs, leading to a quadratic number of connections. While dense CRF is a more general model, it is much less efficient than sparse CRF. In fact, only Gaussian edge dense CRF is used in practice, and even then with approximations. We propose a new pairwise CRF, which we call sparse non-local CRF. Like dense CRF, it has non-local connections, and, therefore, it is more general than sparse CRF. Like sparse CRF, the number of connections is linear, and, therefore, our model is efficient. Besides efficiency, another advantage is that our edge weights are unrestricted. We show that our sparse non-local CRF models properties similar to that of Gaussian dense CRF. We also discuss connections to other CRF models. We demonstrate the usefulness of our model on classical and deep learning applications, for two and multiple labels.
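The key construction in this abstract — non-local pairwise connections whose count stays linear in the number of pixels — can be illustrated with a toy Potts-style energy. This is a minimal sketch, not the paper's graph construction: the random-neighbour sampling, the function names, and the unit edge weights are all illustrative assumptions.

```python
import numpy as np

def sparse_nonlocal_pairwise_energy(labels, weight_fn, rng, k=4):
    """Pairwise Potts energy over a sparse non-local graph.

    Each pixel is connected to k pixels sampled anywhere in the image,
    so there are k*N edges (linear in the image size N) even though the
    connections are non-local. `weight_fn(p, q)` may return any
    unrestricted edge weight for the pair of flat indices (p, q).
    """
    flat = labels.ravel()
    n = flat.size
    energy = 0.0
    for p in range(n):
        for q in rng.choice(n, size=k, replace=False):
            if flat[p] != flat[q]:  # penalise label disagreement
                energy += weight_fn(p, q)
    return energy

rng = np.random.default_rng(0)
labels = (rng.random((8, 8)) > 0.5).astype(int)
e = sparse_nonlocal_pairwise_energy(labels, lambda p, q: 1.0, rng)
print(e)  # non-negative; exactly 0 for a uniform labelling
```

A dense CRF would instead sum over all N(N-1)/2 pixel pairs, which is quadratic in the image size; the sparse non-local graph keeps the non-locality while staying linear.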
EuroCity Persons 2.0: A Large and Diverse Dataset of Persons in Traffic 欧洲城市人员 2.0:一个庞大而多样的交通参与者数据集。
Pub Date : 2024-10-02 DOI: 10.1109/TPAMI.2024.3471170
Sebastian Krebs;Markus Braun;Dariu M. Gavrila
We present the EuroCity Persons (ECP) 2.0 dataset, a novel image dataset for person detection, tracking and prediction in traffic. The dataset was collected on-board a vehicle driving through 29 cities in 11 European countries. It contains more than 250K unique person trajectories in more than 2.0M images, and has a total size of 11 TB. ECP2.0 is about one order of magnitude larger than previous state-of-the-art person datasets in the automotive context. It offers remarkable diversity in terms of geographical coverage, time of day, weather and seasons. We discuss the novel semi-supervised approach that was used to generate the temporally dense pseudo ground-truth (i.e., 2D bounding boxes, 3D person locations) from sparse, manual annotations at keyframes. Our approach leverages auxiliary LiDAR data for 3D uplifting and vehicle inertial sensing for ego-motion compensation. It incorporates keyframe information in a three-stage approach (tracklet generation, tracklet merging into tracks, track smoothing) for obtaining accurate person trajectories. We validate our pseudo ground-truth generation approach in ablation studies, and show that it significantly outperforms existing methods. Furthermore, we demonstrate its benefits for training and testing of state-of-the-art tracking methods. Our approach provides a speed-up factor of about 34 compared to frame-wise manual annotation. The ECP2.0 dataset is made freely available for non-commercial research use.
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 12, pp. 10929-10943.
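The first of the three stages in the abstract's pipeline (tracklet generation) can be sketched with greedy IoU association of per-frame detections. This is an illustrative stand-in under simple assumptions — axis-aligned boxes, a hypothetical 0.3 threshold, no motion model — not the authors' actual pipeline.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_tracklets(frames, thr=0.3):
    """Greedily link per-frame detections into tracklets by IoU overlap."""
    tracklets = []
    for boxes in frames:
        unmatched = list(boxes)
        for t in tracklets:
            if not unmatched:
                break
            best = max(unmatched, key=lambda b: iou(t[-1], b))
            if iou(t[-1], best) >= thr:   # extend the tracklet
                t.append(best)
                unmatched.remove(best)
        tracklets.extend([b] for b in unmatched)  # start new tracklets
    return tracklets

tracks = link_tracklets([[(0, 0, 10, 10)], [(1, 0, 11, 10)]])
print(len(tracks), len(tracks[0]))  # 1 2
```

In the paper's setting, the resulting tracklets would then be merged into full tracks and smoothed, anchored by the manual keyframe annotations.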
NeMF: Neural Microphysics Fields. NeMF:神经微物理场。
Pub Date : 2024-09-30 DOI: 10.1109/TPAMI.2024.3467913
Inbal Kom Betzer, Roi Ronen, Vadim Holodovsky, Yoav Y Schechner, Ilan Koren

Inverse problems in scientific imaging often seek physical characterization of heterogeneous scene materials. The scene is thus represented by physical quantities, such as the density and sizes of particles (microphysics) across a domain. Moreover, the forward image formation model is physical. An important case is that of clouds, where microphysics in three dimensions (3D) dictate the cloud dynamics, lifetime and albedo, with implications to Earth's energy balance, sustainable energy and rainfall. Current methods, however, recover very degenerate representations of microphysics. To enable 3D volumetric recovery of all the required microphysical parameters, we introduce the neural microphysics field (NeMF). It is based on a deep neural network, whose input is multi-view polarization images. NeMF is pre-trained through supervised learning. Training relies on polarized radiative transfer, and noise modeling in polarization-sensitive sensors. The results offer unprecedented recovery, including droplet effective variance. We test NeMF in rigorous simulations and demonstrate it using real-world polarization-image data.

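The supervised pre-training setup described above — learn an inverse map from simulated, noise-corrupted polarization measurements back to microphysics — can be sketched with a linear least-squares model standing in for the deep network. All shapes, the toy forward model, and the noise level below are illustrative assumptions, not NeMF's actual architecture or radiative-transfer simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_pix, n_voxels = 4, 16, 8
W_true = rng.normal(size=(n_voxels, n_views * n_pix))

def render(micro):
    """Toy stand-in for polarized radiative transfer plus sensor noise."""
    return micro @ W_true + 0.01 * rng.normal(size=n_views * n_pix)

X = rng.normal(size=(100, n_voxels))     # ground-truth microphysics samples
Y = np.stack([render(x) for x in X])     # simulated multi-view polarization images

# "network": least-squares inverse map from measurements to microphysics
W_hat, *_ = np.linalg.lstsq(Y, X, rcond=None)
pred = Y @ W_hat
mse = float(np.mean((pred - X) ** 2))
print(mse)
```

The point of the sketch is the training recipe — pairs of simulated measurements and known microphysics supervise the inverse map — which mirrors how the abstract describes pre-training on polarized radiative transfer with sensor-noise modeling.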
Tensor Coupled Learning of Incomplete Longitudinal Features and Labels for Clinical Score Regression. 用于临床评分回归的不完整纵向特征和标签的张量耦合学习。
Pub Date : 2024-09-30 DOI: 10.1109/TPAMI.2024.3471800
Qing Xiao, Guiying Liu, Qianjin Feng, Yu Zhang, Zhenyuan Ning

Longitudinal data with incomplete entries pose a significant challenge for clinical score regression over multiple time points. Although many methods primarily estimate longitudinal scores with complete baseline features (i.e., features collected at the initial time point), such snapshot features may overlook beneficial latent longitudinal traits for generalization. Alternatively, certain completion approaches (e.g., tensor decomposition technology) have been proposed to impute incomplete longitudinal data before score estimation, most of which, however, are transductive and cannot utilize label semantics. This work presents a tensor coupled learning (TCL) paradigm of incomplete longitudinal features and labels for clinical score regression. The TCL enjoys three advantages: 1) It drives semantic-aware factor matrices and collaboratively deals with incomplete longitudinal entries (of features and labels), during which a dynamic regularizer is designed for adaptive attribute selection. 2) It establishes a closed loop connecting baseline features and the coupled factor matrices, which enables inductive inference of longitudinal scores relying on only baseline features. 3) It reinforces the information encoding of baseline data by preserving the local manifold of longitudinal feature space and detecting the temporal alteration across multiple time points. Extensive experiments demonstrate the remarkable performance improvement of our method on clinical score regression with incomplete longitudinal data.

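The coupling idea in this abstract — features and labels share per-subject factors, so observed labels help impute missing feature entries and vice versa — can be sketched with a small alternating-least-squares factorization. The shapes, rank, and observation rate are illustrative assumptions, and this toy omits TCL's dynamic regularizer and temporal modeling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 30, 10, 3                   # subjects, features, latent rank
U_true = rng.normal(size=(n, r))
V_true = rng.normal(size=(d, r))
w_true = rng.normal(size=r)
X = U_true @ V_true.T                 # complete feature matrix (unknown in practice)
y = U_true @ w_true                   # clinical scores
obs = rng.random(X.shape) > 0.3       # ~70% of feature entries observed

U, V, w = rng.normal(size=(n, r)), rng.normal(size=(d, r)), rng.normal(size=r)
for _ in range(20):                   # alternating least squares
    for i in range(n):                # subject factors: feature AND label residuals coupled
        A = np.vstack([V[obs[i]], w])
        b = np.concatenate([X[i, obs[i]], [y[i]]])
        U[i] = np.linalg.lstsq(A, b, rcond=None)[0]
    for j in range(d):                # feature factors fit on observed entries only
        V[j] = np.linalg.lstsq(U[obs[:, j]], X[obs[:, j], j], rcond=None)[0]
    w = np.linalg.lstsq(U, y, rcond=None)[0]

imput_err = float(np.mean((U @ V.T - X)[~obs] ** 2))
print(imput_err)                      # reconstruction error on the *missing* entries
```

Because the label row enters each subject's least-squares system alongside the observed features, the label semantics shape the shared factors — a simplified analogue of the semantic-aware factor matrices described in the abstract.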