
Journal of Real-Time Image Processing: Latest Publications

YOLOv8s-CFB: a lightweight method for real-time detection of apple fruits in complex environments
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-31 | DOI: 10.1007/s11554-024-01543-4
Bing Zhao, Aoran Guo, Ruitao Ma, Yanfei Zhang, Jinliang Gong

With the development of apple-picking robots, deep learning models have become essential in apple detection. However, current detection models are often disrupted by complex backgrounds, leading to low recognition accuracy and slow speeds in natural environments. To address these issues, this study proposes an improved model, YOLOv8s-CFB, based on YOLOv8s. The model introduces partial convolution (PConv) into the backbone network and enhances the C2f module, forming a new architecture, CSPPC, that reduces computational complexity and improves speed. Additionally, FocalModulation replaces the original SPPF module to strengthen the model's ability to recognize key areas. Finally, a bidirectional feature pyramid network (BiFPN) is introduced to adaptively learn the importance of features at each scale, effectively retaining multi-scale information through a bidirectional context-propagation mechanism and improving detection of occluded targets. Test results show that the improved YOLOv8 network achieves better detection performance, with an average precision of 93.86%, 8.83 M parameters, and a detection time of 0.7 ms. The improved algorithm achieves high detection accuracy with a small weight file, making it suitable for deployment on mobile devices. The improved model can therefore detect apples efficiently, accurately, and in real time in complex orchard environments.
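The BiFPN step described above combines multi-scale feature maps with learned, normalized weights. A minimal sketch of this "fast normalized fusion" idea (illustrative NumPy code, not the authors' implementation; the function name and weight values are assumptions):

```python
import numpy as np

def bifpn_fuse(features, weights, eps=1e-4):
    """Fast normalized fusion used in BiFPN: each input feature map gets a
    learnable non-negative weight; the output is their weighted average."""
    w = np.maximum(weights, 0.0)   # ReLU keeps the learned weights non-negative
    w = w / (w.sum() + eps)        # normalize so the weights sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

# Two same-shape feature maps fused with (made-up) learned weights
f1 = np.ones((4, 4))
f2 = np.zeros((4, 4))
out = bifpn_fuse([f1, f2], np.array([3.0, 1.0]))
```

Because the weights are normalized, the fused map stays in the dynamic range of its inputs regardless of how many scales feed in.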

Citations: 0
YOLO9tr: a lightweight model for pavement damage detection utilizing a generalized efficient layer aggregation network and attention mechanism
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-31 | DOI: 10.1007/s11554-024-01545-2
Sompote Youwai, Achitaphon Chaiyaphat, Pawarotorn Chaipetch

Maintaining road pavement integrity is crucial for ensuring safe and efficient transportation. Conventional methods for assessing pavement condition are often laborious and susceptible to human error. This paper proposes YOLO9tr, a novel lightweight object detection model for pavement damage detection, leveraging the advancements of deep learning. YOLO9tr is based on the YOLOv9 architecture, incorporating a partial attention block that enhances feature extraction and attention mechanisms, leading to improved detection performance in complex scenarios. The model is trained on a comprehensive dataset comprising road damage images from multiple countries. This dataset includes an expanded set of damage categories beyond the standard four types (longitudinal cracks, transverse cracks, alligator cracks, and potholes), providing a more nuanced classification of road damage. This broadened classification range allows for a more accurate and realistic assessment of pavement conditions. Comparative analysis demonstrates YOLO9tr’s superior precision and inference speed compared to state-of-the-art models like YOLOv8, YOLOv9 and YOLOv10, achieving a balance between computational efficiency and detection accuracy. The model achieves a high frame rate of up to 136 FPS, making it suitable for real-time applications such as video surveillance and automated inspection systems. The research presents an ablation study to analyze the impact of architectural modifications and hyperparameter variations on model performance, further validating the effectiveness of the partial attention block. The results highlight YOLO9tr’s potential for practical deployment in real-time pavement condition monitoring, contributing to the development of robust and efficient solutions for maintaining safe and functional road infrastructure.
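The partial attention block is described only at a high level in the abstract. As a hedged sketch of the general idea (attend to a subset of channels cheaply and pass the rest through unchanged; the function name, the sigmoid gating, and the 0.5 split ratio are all assumptions, not the paper's design):

```python
import numpy as np

def partial_channel_attention(x, ratio=0.5):
    """Sketch of a partial attention block: apply channel attention to only a
    fraction of the channels, leave the rest untouched, then concatenate."""
    c = x.shape[0]
    k = int(c * ratio)
    attended, passthrough = x[:k], x[k:]
    # squeeze: global average pool per channel -> channel gates in (0, 1)
    squeeze = attended.mean(axis=(1, 2))
    gates = 1.0 / (1.0 + np.exp(-squeeze))        # sigmoid gating
    attended = attended * gates[:, None, None]
    return np.concatenate([attended, passthrough], axis=0)

feat = np.random.rand(8, 16, 16).astype(np.float32)
out = partial_channel_attention(feat)
```

Attending only part of the channels is what keeps such a block lightweight: the gating cost scales with `ratio`, not with the full channel count.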

Citations: 0
ESC-YOLO: optimizing apple fruit recognition with efficient spatial and channel features in YOLOX
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-29 | DOI: 10.1007/s11554-024-01540-7
Jun Sun, Yifei Peng, Chen Chen, Bing Zhang, Zhaoqi Wu, Yilin Jia, Lei Shi

Accurate localization of apple fruits and recognition of occlusion types in complex orchard environments play an important role in precision agriculture. This work proposes an efficient fruit recognition model called Efficient Spatial and Channel Feature YOLOX (ESC-YOLO). ESC-YOLO is built upon YOLOX and fully leverages and emphasizes spatial channel information, ensuring coherence between global information and local features. The optimization strategies for the backbone network involve adopting EfficientViT as the foundational backbone, integrating Spatial and Channel Reconstruction Convolution (SCConv) into the input stem to reorganize spatial channel features and reduce redundancy, and constructing the Efficient-MBConv module, which is optimally combined with the EfficientViTBlock for feature extraction. The optimization strategies for the neck network involve utilizing the Centralized Feature Pyramid Net (CFPNet) as the neck network and employing a Simple, Parameter-Free Attention Module (SimAM) to enhance model performance. In this work, we adopted the lightweight model of the ESC-YOLO for performance evaluation, namely ESC-YOLO-S. It achieves a 4.26% improvement in Top-1 mean Average Precision (mAP) compared to YOLOX-S and significantly reduces the false and missed detections caused by various types of occlusions. Therefore, the improved model meets the requirements for high-precision identification in complex orchard environments.
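SimAM, the parameter-free attention module adopted in the neck, weights each spatial position by an energy function of its deviation from the channel mean, so no extra learnable parameters are introduced. A compact NumPy sketch of the published SimAM formulation (the lambda value and test tensor are illustrative):

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention (sketch): per channel, positions that
    deviate more from the channel mean get higher energy and a larger gate."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    var = d.sum(axis=(1, 2), keepdims=True) / n
    e_inv = d / (4 * (var + lam)) + 0.5            # inverse energy per position
    return x * (1.0 / (1.0 + np.exp(-e_inv)))      # x * sigmoid(E)

feat = np.random.rand(4, 8, 8).astype(np.float32)
out = simam(feat)
```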

Citations: 0
Slim-YOLO-PR_KD: an efficient pose-varied object detection method for underground coal mine
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-28 | DOI: 10.1007/s11554-024-01539-0
Huaxing Mu, Jueting Liu, Yanyun Guan, Wei Chen, Tingting Xu, Zehua Wang

Real-time object detection in underground coal mines is a crucial task in the development of AI-assisted supervision systems. Because of the complex underground environment, limited computing resources, and the variability of object poses, general object detection algorithms perform poorly there. Hence, an improved pose-varied object detection method for underground scenes, Slim-YOLO-PR_KD, is proposed. By designing an efficient pose-varied attention module (EPA) for the backbone network, adding a receptive field block (RFB) module to the neck network, and optimizing the loss function, the underground pose-varied detection model YOLO-PR is obtained, which achieves good accuracy but reduced speed. Building on YOLO-PR, the study designs RFB_SK, a lightweight C2f_GSG module, and a shared-parameter detection head, and selectively replaces modules to slim down the whole network, yielding the lightweight detection model Slim-YOLO-PR. Using attention-guided knowledge distillation for underground object detection, with YOLO-PR as the teacher model, the efficient pose-varied detection model Slim-YOLO-PR_KD for underground coal mines is obtained. Experimental results show that, compared with the baseline model, Slim-YOLO-PR_KD detects faster and more accurately while reducing model parameters and computational complexity by 42% and 46%, respectively, making it capable of real-time underground detection tasks.
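The distillation setup uses YOLO-PR as the teacher for Slim-YOLO-PR. The attention-guided variant is not detailed in the abstract, but its core ingredient, temperature-scaled distillation of the teacher's softened outputs, can be sketched as follows (the logits and temperature are illustrative, not values from the paper):

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, t=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by t^2 as is standard in knowledge distillation."""
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return float(kl * t * t)

teacher = np.array([[4.0, 1.0, 0.5]])   # soft targets from the big model
student = np.array([[3.5, 1.2, 0.4]])   # slim model being trained
loss = kd_loss(student, teacher)
```

The higher the temperature, the more the inter-class similarity structure of the teacher's output is exposed to the student, which is what lets a slimmed network recover much of the teacher's accuracy.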

Citations: 0
Energy-efficient real-time visual image adversarial generation and processing algorithm for new energy vehicles
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-28 | DOI: 10.1007/s11554-024-01544-3
Yinghuan Li, Jicheng Liu

With the rapid development of deep learning over the last decade, generating and processing real-time images has become one of the critical methods in intelligent driving systems for new energy vehicles. However, the real-time images captured by sensors are susceptible to environmental variations, including different weather and lighting conditions. To enhance real-time image generation for new energy vehicles in complex environments and improve real-time visual image processing capabilities, this study proposes an energy-efficient real-time visual image adversarial generation and processing algorithm called ENV-GAN. After analyzing driving situations under various weather and lighting conditions, it hypothesizes a shared latent domain among the mixed image domains and establishes mappings between the different image domains. A multi-encoder weight-sharing technique is used to enhance the generative adversarial network model, and an attention module is integrated to improve the model's image generation. Experimental results and analysis demonstrate that the new algorithm outperforms existing algorithms in tasks such as defogging, rain removal, and lighting enhancement, offering high energy efficiency and low energy consumption.

Citations: 0
Hardware-friendly fast rate-distortion optimization algorithm for AV1 encoder
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-27 | DOI: 10.1007/s11554-024-01535-4
Ran Tang, Xiaofeng Huang, Yan Cui, Xinnan Guo, Yang Zhou, Haibing Yin, Chenggang Yan

The rate-distortion optimization (RDO) process aims to achieve optimal coding performance by selecting the optimal coding mode according to a given strategy in AV1 video coding. However, RDO's high computational complexity and strong data dependency impede real-time applications. To address these issues, a fast RDO algorithm suitable for hardware implementation is proposed. First, we propose a high-frequency-coefficient zero-setting approach to reduce hardware memory occupation. Then, in the rate-distortion calculation stage, an efficient rate estimation method is proposed based on a statistical feature of the number of quantized coefficients, and a distortion estimation method is proposed that exploits intrinsic features of all-zero blocks. Finally, a reconstruction approximation model is proposed to resolve the low parallelism caused by the coupling of pixel reconstruction and prediction data. Experimental results show that the proposed algorithm saves 68.49% and 50.77% of encoding time, at average Bjøntegaard delta rate (BD-Rate) increases of 2.73% and 2.95%, under the all-intra (AI) and random-access (RA) configurations, respectively.
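The rate-estimation idea, predicting bits from the number of nonzero quantized coefficients and short-circuiting all-zero blocks, can be sketched as follows (the linear model and its constants a, b, and the skip-flag cost are illustrative stand-ins, not the paper's fitted values):

```python
import numpy as np

def estimate_rate(coeffs, qstep, a=3.2, b=1.5):
    """Hedged sketch: quantize transform coefficients, then estimate the bit
    cost from the count of nonzero levels with a linear model R ~ a*N + b."""
    levels = np.round(coeffs / qstep).astype(int)
    nnz = int(np.count_nonzero(levels))
    if nnz == 0:          # all-zero block: only a skip/zero flag is signalled
        return 0.5
    return a * nnz + b

block = np.array([[12.0, -3.0], [0.4, 0.2]])
bits = estimate_rate(block, qstep=4.0)
```

Avoiding full entropy coding during mode decision is what makes such an estimator hardware-friendly: a count and a multiply-add replace a serial, context-dependent coding pass.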

Citations: 0
Efficient spatio-temporal network for action recognition
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-23 | DOI: 10.1007/s11554-024-01541-6
Yanxiong Su, Qian Zhao

The input tensor of video data includes temporal, spatial, and channel dimensions, crucial for extracting complementary spatial, temporal, and spatio-temporal features for video action recognition. To efficiently extract and integrate these features, we propose an efficient spatio-temporal module (ESTM) with three pathways dedicated to extracting spatial, temporal, and spatio-temporal features. Each pathway uses the Cross Global Average Pooling (CGAP) module to compress the current dimension, focusing features on the remaining two dimensions. This enhances feature extraction and recognition rates for complex actions. We also introduce a Motion Excitation Module (MEM) to enrich input features by transforming correlations between adjacent frames, reducing computational complexity. Finally, ESTM and MEM are seamlessly integrated into a 2D CNN, forming the efficient spatio-temporal network (ESTN), with minimal impact on network parameters and computational costs. Extensive experiments show that ESTN outperforms state-of-the-art methods on datasets like Something V1 & V2 and HMDB51, validating its effectiveness.
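The Cross Global Average Pooling step, averaging over one dimension so each pathway focuses on the remaining two, can be sketched on a (T, C, H, W) clip tensor (a hedged illustration of the compression idea only; the module's exact placement follows just the abstract):

```python
import numpy as np

def cross_gap(x, dim):
    """Sketch of Cross Global Average Pooling: collapse one dimension of the
    video tensor so the pathway attends to the remaining two.
    x: (T, C, H, W); dim selects which dimension to compress."""
    if dim == "temporal":   # -> (C, H, W): spatial pathway
        return x.mean(axis=0)
    if dim == "channel":    # -> (T, H, W): temporal pathway
        return x.mean(axis=1)
    if dim == "spatial":    # -> (T, C): spatio-temporal/channel pathway
        return x.mean(axis=(2, 3))
    raise ValueError(f"unknown dim: {dim}")

clip = np.random.rand(8, 16, 7, 7).astype(np.float32)  # 8 frames, 16 channels
spatial = cross_gap(clip, "temporal")
temporal = cross_gap(clip, "channel")
```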

Citations: 0
Energy efficiency assessment in advanced driver assistance systems with real-time image processing on custom Xilinx DPUs
IF 3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-22 | DOI: 10.1007/s11554-024-01538-1
Güner Tatar, Salih Bayar

The rapid advancement of embedded AI, driven by the integration of deep neural networks (DNNs) into embedded systems for real-time image and video processing, has been notably accelerated by AI-specific platforms such as AMD Xilinx Vitis AI on the MPSoC-FPGA platform. This platform uses a configurable Deep Processing Unit (DPU) with scalable resource utilization and operating frequencies. Our study employed a detailed methodology to assess the impact of various DPU configurations and frequencies on resource utilization and energy consumption. The findings reveal that increasing the DPU frequency improves both resource-utilization efficiency and performance. Conversely, lower frequencies significantly reduce resource utilization with only a marginal decrease in performance. These trade-offs are influenced not only by frequency but also by variations in the DPU parameters. These findings are critical for developing energy-efficient AI-driven systems for Advanced Driver Assistance Systems (ADAS) based on real-time video processing.
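The frequency/energy trade-off discussed above comes down to energy per processed frame, i.e. average power divided by throughput. A toy comparison under assumed power and FPS figures (illustrative numbers only, not measurements from the study):

```python
def energy_per_frame_mj(power_w, fps):
    """Energy per processed frame in millijoules: average board power (W)
    divided by throughput (frames/s), scaled to mJ."""
    return power_w / fps * 1000.0

# Hypothetical operating points for a configurable DPU
high_freq = energy_per_frame_mj(power_w=6.0, fps=120.0)  # faster, hotter
low_freq = energy_per_frame_mj(power_w=3.5, fps=60.0)    # slower, cooler
```

Under these assumed numbers the higher-frequency point costs less energy per frame despite drawing more power, which mirrors the paper's observation that raising the DPU frequency can improve efficiency overall.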

{"title":"Energy efficiency assessment in advanced driver assistance systems with real-time image processing on custom Xilinx DPUs","authors":"Güner Tatar, Salih Bayar","doi":"10.1007/s11554-024-01538-1","DOIUrl":"https://doi.org/10.1007/s11554-024-01538-1","url":null,"abstract":"<p>The rapid advancement in embedded AI, driven by integrating deep neural networks (DNNs) into embedded systems for real-time image and video processing, has been notably pushed by AI-specific platforms like the AMD Xilinx Vitis AI on the MPSoC-FPGA platform. This platform utilizes a configurable Deep Processing Unit (DPU) for scalable resource utilization and operating frequencies. Our study employed a detailed methodology to assess the impact of various DPU configurations and frequencies on resource utilization and energy consumption. The findings reveal that increasing the DPU frequency enhances resource utilization efficiency and improves performance. Conversely, lower frequencies significantly reduce resource utilization, with only a borderline decrease in performance. These trade-offs are influenced not only by frequency but also by variations in DPU parameters. These findings are critical for developing energy-efficient AI-driven systems in Advanced Driver Assistance Systems (ADAS) based on real-time video processing. 
By leveraging the capabilities of Xilinx Vitis AI deployed on the Kria KV260 MPSoC platform, we explore the intricacies of optimizing energy efficiency through multi-task learning in real-time ADAS applications.</p>","PeriodicalId":51224,"journal":{"name":"Journal of Real-Time Image Processing","volume":"60 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FPGA-based hardware/firmware co-design for real-time radiometric correction onboard microsatellite
IF 3 · CAS Q4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-08-20 · DOI: 10.1007/s11554-024-01536-3
Youcef Ghelamallah, Azzeddine Rachedi

Remote sensing images are inevitably produced with radiometric artifacts due to the photo-response non-uniformity of charge-coupled device (CCD) sensors. In situations where time constraints demand the prompt acquisition of imaging products, integrating an onboard radiometric correction system becomes essential. This paper advocates for a hardware–firmware co-design approach to achieve radiometric correction within the payload front-end electronics (FEE), leveraging the capabilities of field programmable gate array circuits (FPGA). The selection of an appropriate CCD sensor and optical device is guided by a thorough payload mission analysis, ensuring compliance with the specifications derived from Alsat-1B, the Algerian microsatellite launched in September 2016. Simulation results demonstrate that the designed FPGA firmware effectively controls the CCD sensor and configures its settings to achieve real-time radiometric correction of the acquired pixels in accordance with the mission requirements. To ensure efficient utilization during imaging operations, a hardware solution for onboard storage and in-orbit update of the radiometric coefficients has been considered for the radiometric correction system.
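The photo-response non-uniformity correction the paper implements in FPGA firmware is, at its core, a per-detector linear transform: subtract each detector's dark offset and scale by its gain. The sketch below shows that arithmetic in Python under assumed coefficients; the actual FEE pipeline, coefficient format, and bit depth are not specified here.

```python
import numpy as np

def radiometric_correct(raw, dark, gain, bit_depth=10):
    # Per-detector linear correction: gain * (raw - dark_offset),
    # clipped back into the sensor's digital range.
    out = gain * (raw.astype(np.float64) - dark)
    return np.clip(out, 0, 2 ** bit_depth - 1)

# Toy line-scan frame: 4 lines x 5 CCD detectors with non-uniform response.
dark = np.array([10.0, 12.0, 9.0, 11.0, 10.0])   # per-detector offsets
gain = np.array([1.00, 0.95, 1.05, 1.02, 0.98])  # per-detector gains
raw = np.full((4, 5), 100.0)                     # uniform scene, non-uniform readout

flat = radiometric_correct(raw, dark, gain)
```

Because the coefficients are fixed per detector column, the operation maps naturally onto a pipelined multiply-accumulate in FPGA fabric, with the coefficient tables held in onboard storage and updatable in orbit, as the paper describes.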

{"title":"FPGA-based hardware/firmware co-design for real-time radiometric correction onboard microsatellite","authors":"Youcef Ghelamallah, Azzeddine Rachedi","doi":"10.1007/s11554-024-01536-3","DOIUrl":"https://doi.org/10.1007/s11554-024-01536-3","url":null,"abstract":"<p>Remote sensing images are inevitably produced with radiometric artifacts due to the photo-response non-uniformity of charge-coupled device (CCD) sensors. In situations where time constraints demand the prompt acquisition of imaging products, integrating an onboard radiometric correction system becomes essential. This paper advocates for a hardware–firmware co-design approach to achieve radiometric correction within the payload front-end electronics (FEE), leveraging the capabilities of field programmable gate array circuits (FPGA). The selection of an appropriate CCD sensor and optical device is guided by a thorough payload mission analysis, ensuring compliance with the specifications derived from Alsat-1B, the Algerian microsatellite launched in September 2016. Simulation results demonstrate that the designed FPGA firmware effectively controls the CCD sensor and configures its settings to achieve real-time radiometric correction of the acquired pixels in accordance with the mission requirements. 
To ensure efficient utilization during imaging operations, a hardware solution for onboard storage and in-orbit update of the radiometric coefficients has been considered for the radiometric correction system.</p>","PeriodicalId":51224,"journal":{"name":"Journal of Real-Time Image Processing","volume":"34 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time low-light video enhancement on smartphones
IF 3 · CAS Q4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-08-19 · DOI: 10.1007/s11554-024-01532-7
Yiming Zhou, Callen MacPhee, Wesley Gunawan, Ali Farahani, Bahram Jalali

Real-time low-light video enhancement on smartphones remains an open challenge due to hardware constraints such as limited sensor size and processing power. While night mode cameras have been introduced in smartphones to acquire high-quality images in light-constrained environments, their usability is restricted to static scenes, as the camera must remain stationary for an extended period to leverage long exposure times or burst imaging techniques. Concurrently, significant progress has been made in low-light enhancement of images coming out of the camera’s image signal processor (ISP), particularly through neural networks. These methods do not improve the image capture process itself; instead, they function as post-processing techniques to enhance the perceptual brightness and quality of captured imagery for display to human viewers. However, most neural networks are computationally intensive, making their mobile deployment impractical or demanding considerable engineering effort. This paper introduces VLight, a novel single-parameter low-light enhancement algorithm that enables real-time video enhancement on smartphones, along with real-time adaptation to changing lighting conditions and user-friendly fine-tuning. Operating as a custom brightness-booster on digital images, VLight provides real-time and device-agnostic enhancement directly on users’ devices. Notably, it delivers real-time low-light enhancement at up to 67 frames per second (FPS) for 4K videos locally on the smartphone.
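VLight's exact transfer curve is not given in the abstract, but a single-parameter shadow-lifting curve of the kind it describes can be sketched as a gamma-style mapping: one knob, monotone, and cheap enough to apply per pixel in real time. The function below is a generic illustration under those assumptions, not VLight itself.

```python
import numpy as np

def boost(frame, strength=0.5):
    # Single-parameter brightness booster: a gamma-style curve that
    # lifts shadows strongly while leaving highlights largely intact.
    # strength in [0, 1); strength = 0 is the identity mapping.
    x = np.clip(frame, 0.0, 1.0)
    return x ** (1.0 - strength)

frame = np.array([[0.04, 0.25],
                  [0.50, 0.90]])   # normalized low-light pixel values
out = boost(frame, strength=0.5)   # e.g. 0.04 -> 0.2, 0.25 -> 0.5
```

Because the mapping is a pure per-pixel function, it can be precomputed as a lookup table and applied at video rate without a neural network, which is consistent with the real-time, device-agnostic behavior the paper claims.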

{"title":"Real-time low-light video enhancement on smartphones","authors":"Yiming Zhou, Callen MacPhee, Wesley Gunawan, Ali Farahani, Bahram Jalali","doi":"10.1007/s11554-024-01532-7","DOIUrl":"https://doi.org/10.1007/s11554-024-01532-7","url":null,"abstract":"<p>Real-time low-light video enhancement on smartphones remains an open challenge due to hardware constraints such as limited sensor size and processing power. While night mode cameras have been introduced in smartphones to acquire high-quality images in light-constrained environments, their usability is restricted to static scenes as the camera must remain stationary for an extended period to leverage long exposure times or burst imaging techniques. Concurrently, significant process has been made in low-light enhancement on images coming out from the camera’s image signal processor (ISP), particularly through neural networks. These methods do not improve the image capture process itself; instead, they function as post-processing techniques to enhance the perceptual brightness and quality of captured imagery for display to human viewers. However, most neural networks are computationally intensive, making their mobile deployment either impractical or requiring considerable engineering efforts. This paper introduces <i>VLight</i>, a novel single-parameter low-light enhancement algorithm that enables real-time video enhancement on smartphones, along with real-time adaptation to changing lighting conditions and user-friendly fine-tuning. Operating as a custom brightness-booster on digital images, VLight provides real-time and device-agnostic enhancement directly on users’ devices. 
Notably, it delivers real-time low-light enhancement at up to 67 frames per second (FPS) for 4K videos locally on the smartphone.</p>","PeriodicalId":51224,"journal":{"name":"Journal of Real-Time Image Processing","volume":"22 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0