
IEEE Transactions on Circuits and Systems for Video Technology: Latest Articles

IEEE Circuits and Systems Society Information
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-27 | DOI: 10.1109/TCSVT.2025.3647242
Vol. 36, No. 1, pp. C3-C3
Citations: 0
2025 Index IEEE Transactions on Circuits and Systems for Video Technology
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-14 | DOI: 10.1109/TCSVT.2026.3652903
Vol. 35, No. 12, pp. 12925-13126
Citations: 0
IEEE Circuits and Systems Society Information
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-05 | DOI: 10.1109/TCSVT.2025.3634931
Vol. 35, No. 12, pp. C3-C3
Citations: 0
IEEE Circuits and Systems Society Information
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-10-31 | DOI: 10.1109/TCSVT.2025.3623686
Vol. 35, No. 11, pp. C3-C3
Citations: 0
IEEE Circuits and Systems Society Information
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-10-03 | DOI: 10.1109/TCSVT.2025.3612531
Vol. 35, No. 10, pp. C3-C3
Citations: 0
IEEE Circuits and Systems Society Information
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-09-09 | DOI: 10.1109/TCSVT.2025.3600974
Vol. 35, No. 9, pp. C3-C3
Citations: 0
IEEE Transactions on Circuits and Systems for Video Technology Publication Information
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-09-09 | DOI: 10.1109/TCSVT.2025.3600972
Vol. 35, No. 9, pp. C2-C2
Citations: 0
Mining Temporal Priors for Template-Generated Video Compression
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-08-18 | DOI: 10.1109/TCSVT.2025.3599239
Feng Xing;Yingwen Zhang;Meng Wang;Hengyu Man;Yongbing Zhang;Shiqi Wang;Xiaopeng Fan;Wen Gao
The popularity of template-generated videos has recently experienced a significant increase on social media platforms. In general, videos from the same template share similar temporal characteristics, which are unfortunately ignored in the current compression schemes. In view of this, we aim to examine how such temporal priors from templates can be effectively utilized during the compression process for template-generated videos. First, a comprehensive statistical analysis is conducted, revealing that the coding decisions, including the merge, non-affine, and motion information, across template-generated videos are strongly correlated. Subsequently, leveraging such correlations as prior knowledge, a simple yet effective prior-driven compression scheme for template-generated videos is proposed. In particular, a mode decision pruning algorithm is devised to dynamically skip unnecessarily advanced motion vector prediction (AMVP) or affine AMVP decisions. Moreover, an improved AMVP motion estimation algorithm is applied to further accelerate reference frame selection and the motion estimation process. Experimental results on the versatile video coding (VVC) platform VTM-23.0 demonstrate that the proposed scheme achieves moderate time reductions of 14.31% and 14.99% under the Low-Delay P (LDP) and Low-Delay B (LDB) configurations, respectively, while maintaining negligible increases in Bjøntegaard Delta Rate (BD-Rate) of 0.15% and 0.18%, respectively.
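The pruning step described above amounts to reusing coding statistics gathered from one video of a template when encoding other videos of the same template. The Python sketch below illustrates that idea only; the prior table, CU keys, and threshold are hypothetical placeholders, not the authors' VTM-23.0 (C++) implementation.

from dataclasses import dataclass

@dataclass
class CodingPrior:
    merge_ratio: float       # fraction of co-located CUs coded in merge mode
    non_affine_ratio: float  # fraction of co-located CUs coded without affine motion

def should_skip_amvp(prior: CodingPrior, tau: float = 0.9) -> bool:
    # Skip regular AMVP evaluation when the template prior indicates the
    # co-located region is almost always coded with merge mode.
    return prior.merge_ratio >= tau

def should_skip_affine_amvp(prior: CodingPrior, tau: float = 0.9) -> bool:
    # Skip affine AMVP when co-located regions rarely use affine motion.
    return prior.non_affine_ratio >= tau

def encode_cu(cu_key, priors, full_rd_search, pruned_rd_search):
    # Dispatch to a pruned RD search only when a template prior exists.
    prior = priors.get(cu_key)
    if prior is None:
        return full_rd_search(cu_key)
    return pruned_rd_search(cu_key, should_skip_amvp(prior), should_skip_affine_amvp(prior))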
Vol. 36, No. 1, pp. 1160-1172
Citations: 0
An End-to-End Framework for Joint Makeup Style Transfer and Image Steganography
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-08-18 | DOI: 10.1109/TCSVT.2025.3599551
Meihong Yang;Ziyi Feng;Bin Ma;Jian Xu;Yongjin Xian;Linna Zhou
Existing image steganography schemes always introduce obvious modification traces to the cover image, resulting in the risk of secret information leakage. To address this issue, an end-to-end framework for joint makeup style transfer and image steganography is proposed in this paper to achieve imperceptible higher-capacity data hiding. In the scheme, a Parsing-guided Semantic Feature Alignment (PSFA) module is designed to transfer the style of a makeup image to an object non-makeup image, thereby generating a content-style integrated feature matrix. Meanwhile, a Multi-Scale Feature Fusion and Data Embedding (MFFDE) module was devised to encode the secret image into its latent features and fuse them with the generated content-style integrated feature matrix, as well as the non-makeup image features across multiple scales, to achieve the makeup-stego image. As a result, the style of the makeup image is well transformed and the secret image is imperceptibly embedded simultaneously without directly modifying the pixels of the original non-makeup image. Additionally, a Residual-aware Information Compensation Network (RICN) is developed to compensate the loss of the secret image arising from the multilevel data embedding, thereby further enhancing the quality of the reconstructed secret image. Experimental results show that the proposed scheme achieves superior steganalysis resistance capability and visual quality in both makeup-stego images and recovered secret images, compared with other state-of-the-art schemes.
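The abstract names three modules: PSFA for style alignment, MFFDE for fusing the secret image's latent features into the stylised features, and RICN for compensating recovery losses. The PyTorch sketch below only shows how such modules might be wired end to end; every layer choice, tensor shape, and internal detail is a placeholder assumption, not the published architecture.

import torch
import torch.nn as nn

class PSFA(nn.Module):
    # Placeholder for Parsing-guided Semantic Feature Alignment.
    def __init__(self, c=64):
        super().__init__()
        self.mix = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)
    def forward(self, makeup_feat, face_feat):
        return self.mix(torch.cat([makeup_feat, face_feat], dim=1))

class MFFDE(nn.Module):
    # Placeholder for Multi-Scale Feature Fusion and Data Embedding.
    def __init__(self, c=64):
        super().__init__()
        self.embed = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)
        self.to_rgb = nn.Conv2d(c, 3, kernel_size=1)
    def forward(self, style_feat, secret_feat):
        return self.to_rgb(self.embed(torch.cat([style_feat, secret_feat], dim=1)))

class RICN(nn.Module):
    # Placeholder for the Residual-aware Information Compensation Network.
    def __init__(self):
        super().__init__()
        self.refine = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    def forward(self, coarse_secret):
        return coarse_secret + self.refine(coarse_secret)

# Wiring with dummy 64-channel feature maps (batch 1, 32x32 spatial size).
makeup_feat, face_feat, secret_feat = (torch.randn(1, 64, 32, 32) for _ in range(3))
stego_image = MFFDE()(PSFA()(makeup_feat, face_feat), secret_feat)
recovered_secret = RICN()(torch.randn(1, 3, 32, 32))  # stand-in for a decoded secret image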
Vol. 36, No. 1, pp. 1293-1308
Citations: 0
Fine-Detailed Facial Sketch-to-Photo Synthesis With Detail-Enhanced Codebook Priors
IF 11.1 | Tier 1 (Engineering & Technology) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2025-08-12 | DOI: 10.1109/TCSVT.2025.3598016
Mingrui Zhu;Jianhang Chen;Xin Wei;Nannan Wang;Xinbo Gao
Generating high-quality facial photos from fine-detailed sketches is a long-standing research topic that remains unsolved. The scarcity of large-scale paired data due to the cost of acquiring hand-drawn sketches poses a major challenge. Existing methods either lose identity information with oversimplified representations, or rely on costly inversion and strict alignment when using StyleGAN-based priors, limiting their practical applicability. Our primary finding in this work is that the discrete codebook and decoder trained through self-reconstruction in the photo domain can learn rich priors, helping to reduce ambiguity in cross-domain mapping even with current small-scale paired datasets. Based on this, a cross-domain mapping network can be directly constructed. However, empirical findings indicate that using the discrete codebook for cross-domain mapping often results in unrealistic textures and distorted spatial layouts. Therefore, we propose a Hierarchical Adaptive Texture-Spatial Correction (HATSC) module to correct the flaws in texture and spatial layouts. Besides, we introduce a Saliency-based Key Details Enhancement (SKDE) module to further enhance the synthesis quality. Overall, we present a “reconstruct-cross-enhance” pipeline for synthesizing facial photos from fine-detailed sketches. Experiments demonstrate that our method generates high-quality facial photos and significantly outperforms previous approaches across a wide range of challenging benchmarks. The code is publicly available at: https://github.com/Gardenia-chen/DECP
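The pipeline above hinges on quantising encoder features against a discrete codebook learned by photo-domain self-reconstruction. The short sketch below shows only that standard nearest-neighbour quantisation step; the sketch encoder, decoder, and the paper's HATSC and SKDE modules are omitted, and all sizes are illustrative assumptions.

import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # features: (N, D) continuous features, e.g. from a sketch encoder.
    # codebook: (K, D) entries learned by self-reconstruction in the photo domain.
    dists = torch.cdist(features, codebook, p=2)  # (N, K) pairwise distances
    indices = dists.argmin(dim=1)                 # index of the closest code per feature
    return codebook[indices]                      # (N, D) quantised features

# Usage with dummy tensors: 16 feature vectors, a 256-entry codebook, dimension 64.
quantised = quantize(torch.randn(16, 64), torch.randn(256, 64))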
Vol. 36, No. 1, pp. 1075-1088
Citations: 0