
Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings — latest publications

Rethinking Confidence Calibration for Failure Prediction
Fei Zhu, Zhen Cheng, Xu-Yao Zhang, Cheng-Lin Liu
{"title":"Rethinking Confidence Calibration for Failure Prediction","authors":"Fei Zhu, Zhen Cheng, Xu-Yao Zhang, Cheng-Lin Liu","doi":"10.1007/978-3-031-19806-9_30","DOIUrl":"https://doi.org/10.1007/978-3-031-19806-9_30","url":null,"abstract":"","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"8 1","pages":"518-536"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88433107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry
Yu Zhang, Junle Yu, Xiaolin Huang, Wenhui Zhou, Ji Hou
{"title":"PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry","authors":"Yu Zhang, Junle Yu, Xiaolin Huang, Wenhui Zhou, Ji Hou","doi":"10.1007/978-3-031-20080-9_26","DOIUrl":"https://doi.org/10.1007/978-3-031-20080-9_26","url":null,"abstract":"","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"73 1","pages":"443-459"},"PeriodicalIF":0.0,"publicationDate":"2023-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88805254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Diverse Human Motion Prediction Guided by Multi-level Spatial-Temporal Anchors
Sirui Xu, Yu-Xiong Wang, Liangyan Gui
{"title":"Diverse Human Motion Prediction Guided by Multi-level Spatial-Temporal Anchors","authors":"Sirui Xu, Yu-Xiong Wang, Liangyan Gui","doi":"10.1007/978-3-031-20047-2_15","DOIUrl":"https://doi.org/10.1007/978-3-031-20047-2_15","url":null,"abstract":"","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"7 1","pages":"251-269"},"PeriodicalIF":0.0,"publicationDate":"2023-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89325419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
Bridging Images and Videos: A Simple Learning Framework for Large Vocabulary Video Object Detection
Sanghyun Woo, KwanYong Park, Seoung Wug Oh, In-So Kweon, Joon-Young Lee
{"title":"Bridging Images and Videos: A Simple Learning Framework for Large Vocabulary Video Object Detection","authors":"Sanghyun Woo, KwanYong Park, Seoung Wug Oh, In-So Kweon, Joon-Young Lee","doi":"10.1007/978-3-031-19806-9_14","DOIUrl":"https://doi.org/10.1007/978-3-031-19806-9_14","url":null,"abstract":"","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"46 1","pages":"238-258"},"PeriodicalIF":0.0,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83742480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Union-Set Multi-source Model Adaptation for Semantic Segmentation
Zongyao Li, Ren Togo, Takahiro Ogawa, M. Haseyama
{"title":"Union-Set Multi-source Model Adaptation for Semantic Segmentation","authors":"Zongyao Li, Ren Togo, Takahiro Ogawa, M. Haseyama","doi":"10.1007/978-3-031-19818-2_33","DOIUrl":"https://doi.org/10.1007/978-3-031-19818-2_33","url":null,"abstract":"","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"25 1","pages":"579-595"},"PeriodicalIF":0.0,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74570051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Interclass Prototype Relation for Few-Shot Segmentation
A. Okazawa
{"title":"Interclass Prototype Relation for Few-Shot Segmentation","authors":"A. Okazawa","doi":"10.1007/978-3-031-19818-2_21","DOIUrl":"https://doi.org/10.1007/978-3-031-19818-2_21","url":null,"abstract":"","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"25 1","pages":"362-378"},"PeriodicalIF":0.0,"publicationDate":"2022-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81399533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
3D Scene Inference from Transient Histograms
Sacha Jungerman, A. Ingle, Yin Li, Mohit Gupta
Time-resolved image sensors that capture light at pico-to-nanosecond timescales were once limited to niche applications but are now rapidly becoming mainstream in consumer devices. We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors with as few as one pixel. The key idea is to flood-illuminate large scene patches (or the entire scene) with a pulsed light source and measure the time-resolved reflected light by integrating over the entire illuminated area. The one-dimensional measured temporal waveform, called a transient, encodes both distances and albedos at all visible scene points and as such is an aggregate proxy for the scene's 3D geometry. We explore the viability and limitations of transient waveforms by themselves for recovering scene information, and also when combined with traditional RGB cameras. We show that plane estimation can be performed from a single transient, and that with only a few more it is possible to recover a depth map of the whole scene. We also present two proof-of-concept hardware prototypes that demonstrate the feasibility of our approach for compact, mobile, and budget-limited applications.
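The transient formation model described in this abstract is simple enough to sketch directly. The following is a minimal, hypothetical simulation (not the authors' code) of how a single-pixel transient aggregates a flood-illuminated patch: each visible point contributes at a time bin set by its round-trip distance, weighted by albedo and inverse-square falloff. The bin width and sensor parameters are assumptions for illustration.

```python
# A minimal sketch of single-pixel transient formation; bin width and
# sensor parameters are illustrative assumptions, not the paper's values.
import numpy as np

C = 3e8            # speed of light, m/s
BIN_WIDTH = 1e-10  # 100 ps time bins (hypothetical sensor resolution)

def simulate_transient(depth, albedo, n_bins=1024):
    """Aggregate a flood-illuminated patch into one temporal waveform.

    depth, albedo: (H, W) arrays describing the illuminated patch.
    Returns a length-n_bins transient histogram.
    """
    tof = 2.0 * depth / C                            # round-trip time of flight
    bins = (tof / BIN_WIDTH).astype(int)             # discretize arrival times
    weights = albedo / np.maximum(depth, 1e-6) ** 2  # radiometric falloff
    transient = np.zeros(n_bins)
    np.add.at(transient, np.clip(bins, 0, n_bins - 1), weights)
    return transient

# Toy example: a tilted plane produces a characteristic ramp-shaped transient.
h, w = 64, 64
depth = np.tile(np.linspace(1.0, 2.0, h)[:, None], (1, w))  # 1 m to 2 m
albedo = np.full((h, w), 0.5)
t = simulate_transient(depth, albedo)
print("peak bin:", t.argmax(), "total signal (arb. units):", t.sum())
```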
{"title":"3D Scene Inference from Transient Histograms","authors":"Sacha Jungerman, A. Ingle, Yin Li, Mohit Gupta","doi":"10.48550/arXiv.2211.05094","DOIUrl":"https://doi.org/10.48550/arXiv.2211.05094","url":null,"abstract":"Time-resolved image sensors that capture light at pico-to-nanosecond timescales were once limited to niche applications but are now rapidly becoming mainstream in consumer devices. We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors with as few as one pixel. The key idea is to flood illuminate large scene patches (or the entire scene) with a pulsed light source and measure the time-resolved reflected light by integrating over the entire illuminated area. The one-dimensional measured temporal waveform, called emph{transient}, encodes both distances and albedoes at all visible scene points and as such is an aggregate proxy for the scene's 3D geometry. We explore the viability and limitations of the transient waveforms by themselves for recovering scene information, and also when combined with traditional RGB cameras. We show that plane estimation can be performed from a single transient and that using only a few more it is possible to recover a depth map of the whole scene. We also show two proof-of-concept hardware prototypes that demonstrate the feasibility of our approach for compact, mobile, and budget-limited applications.","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"14 1","pages":"401-417"},"PeriodicalIF":0.0,"publicationDate":"2022-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88742523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Editable Indoor Lighting Estimation
Henrique Weber, Mathieu Garon, Jean-François Lalonde
We present a method for estimating lighting from a single perspective image of an indoor scene. Previous methods for predicting indoor illumination usually focus either on simple, parametric lighting that lacks realism, or on richer representations that are difficult or even impossible to understand or modify after prediction. We propose a pipeline that estimates a parametric light that is easy to edit and allows renderings with strong shadows, alongside a non-parametric texture carrying the high-frequency information necessary for realistic rendering of specular objects. Once estimated, the predictions obtained with our model are interpretable and can easily be modified by an artist or user with a few mouse clicks. Quantitative and qualitative results show that our approach makes indoor lighting estimation easier for a casual user to handle, while still producing competitive results.
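To illustrate why a parametric light is easy to edit, here is a hedged toy sketch (not the paper's actual representation): the light is a handful of interpretable numbers, an edit is a plain field update, and a Lambertian shader consumes the result immediately. The ParametricLight fields below are hypothetical.

```python
# A toy illustration of an editable parametric light; fields and values are
# assumptions for demonstration, not the paper's representation.
from dataclasses import dataclass, replace
import numpy as np

@dataclass
class ParametricLight:      # hypothetical fields for illustration
    position: np.ndarray    # 3D position of the light source
    intensity: float        # scalar radiant intensity
    color: np.ndarray       # RGB tint

def shade_lambertian(point, normal, albedo, light):
    """Diffuse shading of one surface point under the parametric light."""
    to_light = light.position - point
    dist = np.linalg.norm(to_light)
    n_dot_l = max(np.dot(normal, to_light / dist), 0.0)
    return albedo * light.color * light.intensity * n_dot_l / dist**2

light = ParametricLight(np.array([0.0, 2.0, 0.0]), 10.0, np.ones(3))
p, n, a = np.zeros(3), np.array([0.0, 1.0, 0.0]), np.array([0.8, 0.7, 0.6])
print("before edit:", shade_lambertian(p, n, a, light))

# An "edit" is just a field update -- e.g. dim the light and move it.
edited = replace(light, intensity=2.0, position=np.array([1.0, 1.5, 0.5]))
print("after edit: ", shade_lambertian(p, n, a, edited))
```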
{"title":"Editable Indoor Lighting Estimation","authors":"Henrique Weber, Mathieu Garon, Jean-François Lalonde","doi":"10.48550/arXiv.2211.03928","DOIUrl":"https://doi.org/10.48550/arXiv.2211.03928","url":null,"abstract":". We present a method for estimating lighting from a single perspective image of an indoor scene. Previous methods for predicting indoor illumination usually focus on either simple, parametric lighting that lack realism, or on richer representations that are difficult or even impossible to understand or modify after prediction. We propose a pipeline that estimates a parametric light that is easy to edit and allows renderings with strong shadows, alongside with a non-parametric texture with high-frequency information necessary for realistic rendering of specular objects. Once estimated, the predictions obtained with our model are interpretable and can easily be modified by an artist/user with a few mouse clicks. Quantitative and qualitative results show that our approach makes indoor lighting estimation easier to handle by a casual user, while still producing competitive results.","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"26 2","pages":"677-692"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72610594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection
Lin Zhang, Xin Li, Dongliang He, Fu Li, Yili Wang, Zhao Zhang
Reference-based image super-resolution (RefSR) is a promising SR branch and has shown great potential in overcoming the limitations of single-image super-resolution. While previous state-of-the-art RefSR methods mainly focus on improving the efficacy and robustness of reference feature transfer, it is generally overlooked that a well-reconstructed SR image should enable better SR reconstruction of its similar LR images when it is used as the reference. Therefore, in this work, we propose a reciprocal learning framework that leverages this fact to reinforce the learning of a RefSR network. In addition, we design a progressive feature alignment and selection module to further improve the RefSR task. The proposed module aligns reference and input images in multi-scale feature spaces and performs reference-aware feature selection in a progressive manner, so that more precise reference features are transferred into the input features and the network's capability is enhanced. Our reciprocal learning paradigm is model-agnostic and can be applied to arbitrary RefSR models. We empirically show that multiple recent state-of-the-art RefSR models are consistently improved by our reciprocal learning paradigm. Furthermore, our proposed model, together with the reciprocal learning strategy, sets new state-of-the-art performance on multiple benchmarks.
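The reciprocal idea can be sketched model-agnostically. Below is a minimal, hypothetical training step: the primary pass super-resolves an LR input with an external reference, the reciprocal pass reuses that SR output as the reference for a similar LR image, and both reconstruction losses supervise the network. The RefSRNet placeholder stands in for any RefSR model and is not the paper's architecture.

```python
# A model-agnostic sketch of the reciprocal RefSR training step; the network
# below is a trivial placeholder, not the paper's architecture.
import torch
import torch.nn as nn

class RefSRNet(nn.Module):
    """Placeholder RefSR model: upsamples LR and fuses a reference image."""
    def __init__(self, scale=4):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)
        self.fuse = nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, lr, ref):
        return self.fuse(torch.cat([self.up(lr), ref], dim=1))

def reciprocal_step(net, lr, hr, lr_sim, hr_sim, ref, l1=nn.L1Loss()):
    sr = net(lr, ref)         # primary pass with an external reference
    sr_sim = net(lr_sim, sr)  # reciprocal pass: SR output as the reference
    return l1(sr, hr) + l1(sr_sim, hr_sim)

net = RefSRNet()
lr, hr = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 128, 128)
lr_sim, hr_sim = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 128, 128)
ref = torch.rand(1, 3, 128, 128)
loss = reciprocal_step(net, lr, hr, lr_sim, hr_sim, ref)
loss.backward()
print("reciprocal loss:", float(loss))
```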
{"title":"RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection","authors":"Lin Zhang, Xin Li, Dongliang He, Fu Li, Yili Wang, Zhao Zhang","doi":"10.48550/arXiv.2211.04203","DOIUrl":"https://doi.org/10.48550/arXiv.2211.04203","url":null,"abstract":"Reference-based image super-resolution (RefSR) is a promising SR branch and has shown great potential in overcoming the limitations of single image super-resolution. While previous state-of-the-art RefSR methods mainly focus on improving the efficacy and robustness of reference feature transfer, it is generally overlooked that a well reconstructed SR image should enable better SR reconstruction for its similar LR images when it is referred to as. Therefore, in this work, we propose a reciprocal learning framework that can appropriately leverage such a fact to reinforce the learning of a RefSR network. Besides, we deliberately design a progressive feature alignment and selection module for further improving the RefSR task. The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection in a progressive manner, thus more precise reference features can be transferred into the input features and the network capability is enhanced. Our reciprocal learning paradigm is model-agnostic and it can be applied to arbitrary RefSR models. We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm. Furthermore, our proposed model together with the reciprocal learning strategy sets new state-of-the-art performances on multiple benchmarks.","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"7 1","pages":"648-664"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89730799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Towards Real World HDRTV Reconstruction: A Data Synthesis-based Approach
Zhen Cheng, Tao Wang, Yong Li, Fenglong Song, C. Chen, Zhiwei Xiong
Existing deep-learning-based HDRTV reconstruction methods assume one kind of tone mapping operator (TMO) as the degradation procedure when synthesizing SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks in modeling realistic degradation: information over-preservation, color bias, and possible artifacts, which make the trained reconstruction networks hard to generalize to real-world cases. To solve this problem, we propose a learning-based data synthesis approach that learns the properties of real-world SDRTVs by integrating several tone mapping priors into both the network structure and the loss functions. Specifically, we design a conditioned two-stream network that uses prior tone mapping results as guidance to synthesize SDRTVs via both global and local transformations. To train the data synthesis network, we form a novel self-supervised content loss that constrains different aspects of the synthesized SDRTVs in regions with different brightness distributions, plus an adversarial loss that pushes the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. We then collect two inference datasets containing labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.
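As a rough illustration of the brightness-partitioned content loss mentioned above, the sketch below constrains a synthesized SDRTV against a prior tone-mapped result with different weights in dark, mid, and bright regions of the HDR source. The thresholds, weights, and gamma prior are assumptions, not the paper's values.

```python
# A hedged sketch of a brightness-partitioned content loss; thresholds,
# weights, and the gamma tone-mapping prior are illustrative assumptions.
import torch

def region_content_loss(sdr_fake, sdr_prior, hdr, t_low=0.2, t_high=0.8,
                        weights=(1.0, 0.5, 1.5)):
    lum = hdr.mean(dim=1, keepdim=True)  # crude HDR luminance proxy
    masks = [(lum < t_low).float(),                        # dark regions
             ((lum >= t_low) & (lum <= t_high)).float(),   # mid regions
             (lum > t_high).float()]                       # bright regions
    loss = 0.0
    for w, m in zip(weights, masks):
        denom = m.sum().clamp(min=1.0)  # avoid division by zero
        loss = loss + w * (m * (sdr_fake - sdr_prior).abs()).sum() / denom
    return loss

hdr = torch.rand(1, 3, 64, 64)
sdr_prior = hdr.clamp(0, 1) ** (1 / 2.4)  # simple gamma TMO as the prior
sdr_fake = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = region_content_loss(sdr_fake, sdr_prior, hdr)
loss.backward()
print("region-weighted content loss:", float(loss))
```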
{"title":"Towards Real World HDRTV Reconstruction: A Data Synthesis-based Approach","authors":"Zhen Cheng, Tao Wang, Yong Li, Fenglong Song, C. Chen, Zhiwei Xiong","doi":"10.48550/arXiv.2211.03058","DOIUrl":"https://doi.org/10.48550/arXiv.2211.03058","url":null,"abstract":"Existing deep learning based HDRTV reconstruction methods assume one kind of tone mapping operators (TMOs) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks on modeling the realistic degradation: information over-preservation, color bias and possible artifacts, making the trained reconstruction networks hard to generalize well to real-world cases. To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions. In specific, we design a conditioned two-stream network with prior tone mapping results as a guidance to synthesize SDRTVs by both global and local transformations. To train the data synthesis network, we form a novel self-supervised content loss to constraint different aspects of the synthesized SDRTVs at regions with different brightness distributions and an adversarial loss to emphasize the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. Then we collect two inference datasets containing both labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that, the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.","PeriodicalId":72676,"journal":{"name":"Computer vision - ECCV ... : ... European Conference on Computer Vision : proceedings. European Conference on Computer Vision","volume":"136 1","pages":"199-216"},"PeriodicalIF":0.0,"publicationDate":"2022-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84938790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5