
Latest Publications in Computer Vision and Image Understanding

CSNet: A content and structure-aware approach for color constancy
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.cviu.2026.104638
Zhuo-Ming Du, Hong-An Li, Qian Yu, Wen-He Chen, Fei-long Han
Accurate estimation and correction of global illuminant color, known as color constancy, is crucial for computational photography and computer vision but remains challenging under complex lighting conditions. We propose CSNet, an end-to-end framework that improves color constancy through a novel content-guided feature fusion approach. The input image is first decomposed into three precomputed components: mean intensity, variation magnitude, and variation direction. These components are dynamically reweighted by the Content-Weighting Network (CWN), which generates spatially varying weight maps by leveraging both local and global image features. The reweighted components are fused via the Adaptive Fusion Module (AFM) to produce an HDR-like intermediate representation. This representation is then processed by the Illumination Prediction Network (IPN), which applies semantic-aware weighting to estimate the global illuminant color as an RGB triplet. Extensive experiments on standard benchmarks demonstrate that CSNet achieves state-of-the-art performance, offering robust and visually consistent results under diverse lighting conditions. These advantages make CSNet a powerful tool for applications such as automatic photo correction and augmented reality.
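As a rough illustration of the decomposition step described above (our sketch, not the authors' implementation; the exact component definitions used by CSNet may differ), the three precomputed components and a toy content-weighted fusion can be written in PyTorch as:

```python
import torch
import torch.nn.functional as F

def decompose(img, k=3):
    """Sketch of the three components named in the abstract.
    img: (B, 3, H, W) tensor in [0, 1]."""
    mean = F.avg_pool2d(img, k, stride=1, padding=k // 2)  # local mean intensity
    diff = img - mean                                      # residual variation
    mag = diff.abs()                                       # variation magnitude
    direction = diff / (mag + 1e-6)                        # per-channel sign, a crude direction proxy
    return mean, mag, direction

class ToyCWN(torch.nn.Module):
    """Hypothetical stand-in for the Content-Weighting Network: a single conv
    produces one spatially varying weight map per component."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(9, 3, kernel_size=3, padding=1)

    def forward(self, mean, mag, direction):
        w = torch.softmax(self.conv(torch.cat([mean, mag, direction], dim=1)), dim=1)
        # Per-pixel weighted fusion of the three components (a crude AFM analogue).
        return w[:, 0:1] * mean + w[:, 1:2] * mag + w[:, 2:3] * direction
```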
Citations: 0
STAR Block: Adaptive spatio-temporal recalibration for action quality assessment
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-12 | DOI: 10.1016/j.cviu.2026.104656
Junhao Sun, Lanfei Zhao
Action Quality Assessment (AQA) aims to quantitatively evaluate the execution quality of complex human actions, which poses significant challenges due to the need to jointly model spatio-temporal dynamics and semantic structures. Existing approaches typically rely on static single-branch architectures, limiting their capacity to balance local fine-grained details and global rhythmic dependencies, especially in high-complexity scenarios. To address these limitations, we propose a novel Spatio-Temporal Adaptive Recalibration (STAR) Block, which enables highly discriminative representation learning via a multi-dimensional modeling strategy. Specifically, we first design a Multi-Scale Context Encoder to capture subtle local cues by leveraging parallel convolutions across spatial, temporal, and joint domains, enhancing the perception of motion details and short-term dynamics. Second, we introduce an Axial Attention-Based Global Dependency Modeling Module, which efficiently captures long-range temporal relationships while preserving the original spatio-temporal structure, thus reinforcing the understanding of phase coherence and motion rhythm. Third, a Dynamic Attention-Guided Adaptive Feature Fusion mechanism is proposed to integrate multi-path temporal semantics by assigning adaptive weights to local and global representations, enabling dynamic equilibrium in temporal modeling. In extensive evaluations across multiple metrics, our STAR Block outperforms state-of-the-art methods by significant margins, achieving average Spearman's ρ improvements of 1.56% on AQA-7 and 0.57% on MTL-AQA with DD supervision, and a near-perfect 99.52% accuracy on FR-FS.
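For context, the Spearman's ρ reported above is the rank correlation between predicted and ground-truth quality scores, the standard AQA metric. A minimal evaluation helper (ours, not the paper's code):

```python
from scipy.stats import spearmanr

def aqa_rank_correlation(pred_scores, true_scores):
    """Spearman's rank correlation between two 1-D sequences of
    per-video quality scores."""
    rho, _ = spearmanr(pred_scores, true_scores)
    return rho

# Example: predictions in the correct rank order give rho = 1.0.
print(aqa_rank_correlation([0.1, 0.4, 0.7, 0.9], [10, 40, 70, 90]))
```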
Citations: 0
Boundary-aware semantic segmentation for ice hockey rink registration
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.cviu.2025.104627
Zhibo Wang, Amir Nazemi, Stephie Liu, Sirisha Rambhatla, Yuhao Chen, David Clausi
Accurate registration of ice hockey rinks from broadcast video frames is fundamental to sports analytics, as it aligns the rink template and broadcast frame into a unified coordinate system for consistent player analysis. Existing approaches, including keypoint- and segmentation-based methods, often yield suboptimal homography estimation due to insufficient attention to rink boundaries. To address this, we propose a segmentation-based framework that explicitly introduces the rink boundary as a new segmentation class. To further improve accuracy, we introduce three components that enhance boundary awareness: (i) a boundary-aware loss to strengthen boundary representation, (ii) a dynamic class-weighted mechanism in homography estimation to emphasize informative regions, and (iii) a self-distillation strategy to enrich feature diversity. Experiments on the NHL and SHL datasets demonstrate that our method significantly outperforms both baselines, achieving improvements of +2.84 and +3.48 in IoU_part and IoU_whole on the NHL dataset, and +1.53 and +5.85 on the SHL dataset, respectively. Ablation studies further confirm the contribution of each component, establishing a robust solution for rink registration and a strong foundation for downstream sports vision tasks.
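One plausible way to realize the explicit boundary class mentioned above (an assumption on our part; the paper's exact construction may differ) is a morphological gradient of the rink segmentation mask:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def add_boundary_class(mask, width=2):
    """Derive a boundary label from a binary rink mask: pixels where the
    dilated and eroded masks disagree. Returns 0=background, 1=rink,
    2=boundary. Hypothetical construction, not the paper's code."""
    structure = np.ones((2 * width + 1, 2 * width + 1), dtype=bool)
    boundary = binary_dilation(mask, structure) & ~binary_erosion(mask, structure)
    out = mask.astype(np.int64)
    out[boundary] = 2
    return out
```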
Citations: 0
3D-aware virtual try-on using only 2D inputs
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-21 | DOI: 10.1016/j.cviu.2026.104661
Jaeyoon Lee, Hojoon Jung, Jongwon Choi
We present 3DFit, a novel 3D-aware virtual try-on framework that synthesizes realistic try-on images using only 2D inputs. Unlike previous methods that either ignore 3D body geometry or rely entirely on 3D clothing models, 3DFit utilizes 3D human meshes estimated from 2D images and adaptively transforms 3D clothing templates guided by 2D clothing images. We further introduce a warping strategy that integrates 3D information into 2D clothing images using a set of pre-designed 3D templates, enabling efficient adaptation to various body shapes and poses. As a result, our method supports accurate and personalized virtual try-on experiences. Experimental results on the VITON-HD dataset demonstrate that 3DFit outperforms existing methods in preserving garment structure and maintaining high visual quality across a wide range of body types and poses.
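A core prerequisite of any 3D-aware pipeline like the one above is placing an estimated 3D body or clothing mesh into the 2D image plane, which reduces to a pinhole projection. A generic sketch (our illustration with made-up intrinsics, not the 3DFit pipeline):

```python
import numpy as np

def project_vertices(verts, K):
    """Project 3D mesh vertices (N, 3), in camera coordinates with z > 0,
    to 2D pixel coordinates using pinhole intrinsics K (3, 3)."""
    uvw = (K @ verts.T).T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])      # hypothetical focal length / principal point
print(project_vertices(np.array([[0.1, -0.2, 2.0]]), K))  # -> [[562. 412.]]
```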
Citations: 0
Interaction-aware representation learning for action quality assessment in freestyle skiing big air
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-21 | DOI: 10.1016/j.cviu.2026.104634
Shiyue Chen, Yanchao Liu, Ziyue Wang, Xina Cheng, Takeshi Ikenaga
Freestyle skiing big air requires precise athlete–ski coordination to determine both technical difficulty and execution quality. Accurate action quality assessment in this discipline therefore necessitates explicit modeling of human–object interactions. However, most existing methods rely on video-level or human-centric representations, overlooking structured athlete–ski relationships and limiting evaluation of control and stability. To address this, we construct a freestyle skiing big air dataset with fine-grained annotations, including frame-level athlete–ski bounding boxes and performance-related metadata. Based on this dataset, we propose an interaction-aware framework that captures athlete–ski coordination by combining instance-level appearance and positional features through spatiotemporal reasoning. Furthermore, to prevent the commonly used uniform sampling from diluting performance-critical moments in long sequences, we introduce a training-free entropy-based sampling strategy that exploits athlete–ski geometric dynamics to identify performance-critical moments such as take-off, rotation, and landing, thereby reducing redundancy. Together, these designs address where to look and when to focus in big air assessment. Extensive experiments demonstrate that our method achieves a Spearman's rank correlation of 0.7173 on the proposed dataset, outperforming state-of-the-art methods.
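As a hedged sketch of the training-free, geometry-driven sampling idea (the scoring below is our assumption; the paper's exact formulation is not given in the abstract), one can score each frame by the change in athlete–ski relative geometry and keep the highest-information frames:

```python
import numpy as np

def select_key_frames(athlete_boxes, ski_boxes, k=16):
    """Pick k frames where athlete-ski relative geometry changes most.
    Boxes are (T, 4) arrays of [x1, y1, x2, y2] per frame."""
    def center(b):
        return np.stack([(b[:, 0] + b[:, 2]) / 2, (b[:, 1] + b[:, 3]) / 2], axis=1)

    rel = center(athlete_boxes) - center(ski_boxes)        # relative offset per frame
    motion = np.linalg.norm(np.diff(rel, axis=0), axis=1)  # frame-to-frame change
    motion = np.concatenate([[0.0], motion])
    # Keep the k frames with the largest geometric change (e.g. take-off,
    # rotation, landing), returned in temporal order.
    return np.sort(np.argsort(motion)[-k:])
```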
Citations: 0
EventSleep2: Sleep activity recognition on complete night sleep recordings with an event camera
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-02 | DOI: 10.1016/j.cviu.2025.104619
Nerea Gallego, Carlos Plou, Miguel Marcos, Pablo Urcola, Luis Montesano, Eduardo Montijano, Ruben Martinez-Cantin, Ana C. Murillo
Sleep is fundamental to health, and society is more and more aware of the impact and relevance of sleep disorders. Traditional diagnostic methods, like polysomnography, are intrusive and resource-intensive. Instead, research is focusing on developing novel, less intrusive or portable methods that combine intelligent sensors with activity recognition for diagnosis support and scoring. Event cameras offer a promising alternative for automated, in-home sleep activity recognition due to their excellent low-light performance and low power consumption. This work introduces EventSleep2-data, a significant extension to the EventSleep dataset, featuring 10 complete night recordings (around 7 h each) of volunteers sleeping in their homes. Unlike the original short and controlled recordings, this new dataset captures natural, full-night sleep sessions under realistic conditions. This new data incorporates challenging real-world scene variations, an efficient movement-triggered sparse data recording pipeline, and synchronized 2-channel EEG data for a subset of recordings. We also present EventSleep2-net, a novel event-based sleep activity recognition approach with a dual-head architecture to simultaneously analyze motion classes and static poses. The model is specifically designed to handle the motion-triggered, sparse nature of complete night recordings. Unlike the original EventSleep architecture, EventSleep2-net can predict both movement and static poses even during long periods with no events. We demonstrate state-of-the-art performance on both EventSleep1-data, the original dataset, and EventSleep2-data, with comprehensive ablation studies validating our design decisions. Together, EventSleep2-data and EventSleep2-net overcome the limitations of the previous setup and enable continuous, full-night analysis for real-world sleep monitoring, significantly advancing the potential of event-based vision for sleep disorder studies. Code and data are publicly available on the webpage: https://sites.google.com/unizar.es/eventsleep.
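Event streams are sparse lists of (x, y, t, polarity) tuples; a common preprocessing step for recognition networks like the one above is to accumulate a time slice of events into a fixed-size frame (a generic sketch, not the EventSleep2 pipeline):

```python
import numpy as np

def events_to_frame(xs, ys, ps, height, width):
    """Accumulate one slice of events into a 2-channel histogram frame:
    channel 0 counts positive-polarity events, channel 1 negative ones.
    xs, ys are integer pixel coordinates; ps holds polarities in {-1, +1}."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(frame[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(frame[1], (ys[ps <= 0], xs[ps <= 0]), 1.0)
    return frame
```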
Citations: 0
A dual-channel model based on multi-feature fusion for face liveness detection
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.cviu.2026.104635
Bowen Xu, Yaru Sui, Longxin Liu, Zhenlong Ma, Yunlong Shi, Wentong Li, Xiaoqiang Ji
Face liveness detection algorithms are widely used in anti-spoofing applications, where they safeguard the accuracy and security of face recognition systems. However, with the continuous development of technologies such as 3D printing and artificial intelligence, traditional face liveness detection algorithms struggle to withstand spoofing attacks effectively. In this paper, we propose a multi-feature fusion algorithm that uses only facial video for face liveness detection. First, we design a dual-channel network named DC-Net, which extracts robust remote photoplethysmography signals directly from 5-second facial videos as well as fine global texture features from the keyframes of the image sequence. An attention-based fusion module then performs feature-level fusion, and a fully connected layer carries out the final binary classification. Our methodology was validated on the REPLAY-ATTACK and 3DMAD datasets, covering printing, screen replay, and 3D mask attacks, and attained accuracies of 99.79% and 100% on the two datasets, respectively. Cross-dataset testing on the CASIA-FASD and HKBU-MARs V1+ datasets yielded HTERs of 25.56% and 0.00%, respectively. These results indicate that the algorithm handles spoofing attacks accurately and robustly across many different scenarios, providing useful ideas and technical support for the design and implementation of reliable face recognition systems.
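A minimal version of the rPPG extraction mentioned above averages the green channel over the face region and band-passes it to the heart-rate band. The sketch below assumes 30 fps video and a precomputed face crop; it is our illustration, not the paper's DC-Net:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rppg_from_roi(frames, fps=30.0):
    """frames: (T, H, W, 3) uint8 face crops. Returns a band-passed
    pulse signal restricted to 0.7-4 Hz (about 42-240 bpm)."""
    green = frames[:, :, :, 1].astype(np.float64).mean(axis=(1, 2))  # spatial mean of green channel
    green = green - green.mean()                                     # remove DC component
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    return filtfilt(b, a, green)
```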
Citations: 0
A YOLO-OC real-time small object detection in ocean scenes
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.cviu.2025.104625
Yang Zhang, Tao Qin, Yimin Zhou
Ocean scenes are usually intricate and complex, with low signal-to-noise ratios for tiny or distant objects and susceptibility to interference from the underwater background and lighting conditions, which makes general object detection methods difficult to apply directly. To solve these problems, a YOLO-OCEAN method is proposed, with YOLOv5 as the baseline model. An ultra-small-scale feature layer, multi-branch feature enhancement with cross-scale fusion, a visual-transformer bridge, a CSP-connected SPPF block, and dynamic activation are incorporated into the backbone and neck to improve detection performance. A more efficient Intersection-over-Union regression loss function is applied to the detection head structure. Moreover, the model is re-parameterized and made lightweight to increase its detection speed. Comparison experiments against other baseline object detection models show that the proposed YOLO-OC method achieves 86.6% mAP@0.5 with a 5.1 ms inference time, demonstrating real-time, high-accuracy detection of small objects in ocean scenes.
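For reference, the SPPF block the abstract builds on is the standard YOLOv5 Spatial Pyramid Pooling - Fast module, in which three sequential 5x5 max-pools reproduce the receptive fields of parallel 5/9/13 pooling. A simplified PyTorch sketch of the vanilla block (the paper's CSP-connected variant would wrap this differently, and the real YOLOv5 block adds BatchNorm and SiLU after each conv):

```python
import torch
import torch.nn as nn

class SPPF(nn.Module):
    """Simplified YOLOv5-style SPPF block."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = nn.Conv2d(c_in, c_mid, 1, bias=False)
        self.cv2 = nn.Conv2d(c_mid * 4, c_out, 1, bias=False)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)      # receptive field ~5
        y2 = self.pool(y1)     # receptive field ~9
        y3 = self.pool(y2)     # receptive field ~13
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))
```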
Citations: 0
Indoor UAV navigation using event cameras and intermediate frame reconstruction
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2026-01-10 | DOI: 10.1016/j.cviu.2026.104650
David Tejero-Ruiz, David Solís-Martín, Francisco J. Pérez-Grau, Joaquín Borrego-Díaz
Indoor UAV navigation faces significant challenges due to the absence of GPS signals and the limitations of conventional visual-inertial systems under difficult lighting and motion conditions. This paper presents an event-based visual-inertial odometry system that addresses these limitations through intermediate frame reconstruction from event streams combined with established odometry algorithms. The approach leverages the unique characteristics of event cameras (microsecond temporal resolution, 120 dB dynamic range, and immunity to motion blur) to maintain stable navigation performance under conditions that cause conventional systems to fail. The system achieves real-time operation at 30 Hz frame reconstruction and 20 Hz pose estimation on embedded hardware, consuming 15 W of power while adding only 50 g to the UAV platform. Experimental validation in controlled indoor environments demonstrates mean absolute pose errors of 26–42 cm across different operational conditions, comparable to conventional visual-inertial systems. Critically, the system maintains stable performance during rapid lighting transitions, showing only 59% performance degradation relative to baseline conditions, whereas conventional cameras typically experience complete tracking failure. The results establish event-based visual-inertial odometry as a viable alternative for indoor UAV navigation, particularly in applications that value environmental robustness over marginal accuracy improvements under optimal conditions.
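A first-order version of the intermediate frame reconstruction described above integrates signed event polarities into a leaky brightness buffer and emits a frame at the target rate. This is our simplified sketch under those assumptions; the paper's reconstruction method may be considerably more elaborate:

```python
import numpy as np

def reconstruct_frames(events, height, width, frame_dt=1.0 / 30.0, decay=0.9):
    """events: iterable of (x, y, t, p) with t in seconds and p in {-1, +1}.
    Integrates polarities into a log-brightness buffer and emits a frame
    every frame_dt seconds, decaying the buffer between frames."""
    buf = np.zeros((height, width), dtype=np.float32)
    frames, next_t = [], frame_dt
    for x, y, t, p in events:
        while t >= next_t:          # emit frame(s) at the target rate (30 Hz here)
            frames.append(buf.copy())
            buf *= decay            # leak stale contrast away between frames
            next_t += frame_dt
        buf[y, x] += p              # accumulate signed contrast change
    frames.append(buf.copy())
    return frames
```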
Citations: 0
Extending Large Language Models to multimodality for non-English languages
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.cviu.2025.104618
Elio Musacchio, Lucia Siciliani, Pierpaolo Basile, Giovanni Semeraro
The growing popularity of Large Vision-Language Models has highlighted and intensified one of the most well-known challenges in the field of Large Language Models: training is mainly, and most of the time exclusively, conducted on English data. Consequently, the resulting models are more prone to error in non-English tasks, and this issue is exacerbated in multimodal settings that are even more complex and use task-specific datasets. Given this, research on Large Language Models has turned toward adapting them to non-English languages. However, the scarcity of open and curated resources for these languages poses a significant limitation. In this work, we aim to tackle the aforementioned challenge by exploring Large Vision-Language Model adaptation to non-English languages, using machine translation to overcome the lack of curated data. We also analyze how the evaluation of the results is influenced when training a vision-to-text adapter across different languages, examining the performance variations and challenges associated with multilingual adaptation. Finally, we highlight the importance of using open resources to ensure transparency and reproducibility of the results. Following this philosophy, we provide open access to the entire codebase of the adaptation pipeline, along with the trained models and dataset, to foster further research.
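The machine translation step described above can be prototyped with the Hugging Face pipeline API. The model name below (Helsinki-NLP/opus-mt-en-it) is one plausible choice on our part, not necessarily the one the authors used:

```python
from transformers import pipeline

# Hypothetical English -> Italian caption translation for adapting a
# vision-language training set; swap the model for the target language.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")

captions = ["A dog catching a frisbee in the park."]
translated = [out["translation_text"] for out in translator(captions)]
print(translated)
```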
Citations: 0