
Latest publications in Cognitive Robotics

A defect detection model for transmission line stockbridge dampers based on YOLOv11 with privacy protection
Pub Date: 2026-01-01 DOI: 10.1016/j.cogr.2025.12.002
Junyang Deng, Tian Peng, Song Deng
Stockbridge dampers, critical components of transmission line integrity management, require precise defect detection to ensure grid reliability. While deep learning has emerged as a powerful tool for identifying damper defects amid complex environmental interference and variable target morphologies, current approaches lack integrated privacy-preservation mechanisms, a critical limitation given the fragmented distribution of inspection data across regional utilities, which exacerbates data silos and impedes collaborative model refinement. This study introduces a privacy-aware federated learning framework that combines an optimized YOLOv11 architecture with systematic privacy-preserving mechanisms for damper defect diagnostics. Our methodology fundamentally redefines data governance by implementing localized client training with the Federated Averaging algorithm (FedAvg) for secure multi-party parameter aggregation, thereby eliminating raw data transmission while ensuring model convergence. Three pivotal contributions distinguish this work. First, we establish the FDRD benchmark dataset, comprising real-world transmission line inspection imagery across multiple defect scenarios, which constitutes the first standardized evaluation dataset for damper condition analysis. Second, we develop a federated learning architecture integrating encrypted parameter exchange protocols that jointly address data privacy constraints and regional data fragmentation, enabling collaborative model enhancement without raw data centralization. Third, extensive evaluations demonstrate significant performance improvements over baseline models (YOLOv9/YOLOv10), achieving state-of-the-art metrics including 0.9 mAP50, 0.928 precision, and 0.785 recall while preserving detection robustness comparable to centralized training paradigms. We share our code at https://github.com/yd479/Fed-StockbridgeDefect.git.
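The federated setup described above rests on FedAvg-style aggregation of locally trained detector weights. Below is a minimal, illustrative sketch (not the authors' released code) of sample-size-weighted parameter averaging over PyTorch state dicts; the function name and the assumption that each client reports its local sample count are ours.

```python
import copy
import torch

def fedavg(client_state_dicts, client_sizes):
    """Sample-size-weighted average of client model parameters (FedAvg).

    client_state_dicts: list of state_dicts from locally trained models.
    client_sizes: number of local training samples per client (weights).
    """
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        # weighted sum of this parameter tensor across clients
        global_state[key] = torch.stack(
            [sd[key].float() * (n / total)
             for sd, n in zip(client_state_dicts, client_sizes)]
        ).sum(dim=0)
    return global_state

# usage sketch: the server loads the averaged weights back into the global detector
# global_model.load_state_dict(fedavg(collected_states, collected_sizes))
```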
{"title":"A defect detection model for transmission line stockbridge dampers based on YOLOv11 with privacy protection","authors":"Junyang Deng ,&nbsp;Tian Peng ,&nbsp;Song Deng","doi":"10.1016/j.cogr.2025.12.002","DOIUrl":"10.1016/j.cogr.2025.12.002","url":null,"abstract":"<div><div>Stockbridge dampers, critical components of transmission line integrity management, require precise defect detection to ensure grid reliability. While deep learning has emerged as powerful tools for identifying damper defects amidst complex environmental interferences and variable target morphologies, current approaches lack integrated privacy preservation mechanisms– a critical limitation given the fragmented distribution of inspection data across regional utilities, which exacerbates data silos and impedes collaborative model refinement. This study introduces a privacy-aware federated learning framework synergizing an optimized YOLOv11 architecture with systematic privacy-preserving mechanisms for damper defect diagnostics. Our methodology fundamentally redefines data governance by implementing localized client training with the Federated Averaging algorithm (FedAvg) for secure multi-party parameter aggregation, thereby eliminating raw data transmission while ensuring model convergence. Three pivotal contributions distinguish this work. First, we establish the FDRD benchmark dataset comprising real-world transmission line inspection imagery across multiple defect scenarios, creating the first standardized evaluation dataset for damper condition analysis. Second, we develop a federated learning architecture integrating encrypted parameter exchange protocols that jointly address data privacy constraints and regional data fragmentation, enabling collaborative model enhancement without raw data centralization. Third, extensive evaluations demonstrate significant performance improvements over baseline models (YOLOv9/YOLOv10), achieving state-of-the-art metrics including 0.9 mAP50, 0.928 precision, and 0.785 recall while preserving detection robustness comparable to centralized training paradigms. We share our code at <span><span>https://github.com/yd479/Fed-StockbridgeDefect.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"6 ","pages":"Pages 44-54"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DualSpinNet: A crop yield prediction model based on LSTM and GRU
Pub Date: 2026-01-01 DOI: 10.1016/j.cogr.2025.12.001
Tao Zhang, Qiang Yang, Xu Tong, Longhe Hu, Jie Shao
Accurate prediction of crop yield is of great significance for agricultural production and food security. In this paper, we introduce DualSpinNet, a novel model that integrates long short-term memory network (LSTM) and gated recurrent unit (GRU) architectures to address this challenge. This model employs a dual-stream approach to extract temporal features of climate and soil data, utilizing parallel LSTM and GRU layers. These features are subsequently refined through an additional GRU layer to enhance time-series dependencies. The dual recurrent structure allows for more precise extraction and processing of multi-level temporal features, thereby improving the accuracy of prediction. The final yield predictions are generated through a fully connected layer. We trained and validated the model using the Kaggle dataset, and compared its performance with other state-of-the-art models. Empirical results demonstrate that our proposed model achieves lower MSE and MAE, making it effective for crop yield prediction. The proposed method offers a highly accurate tool for agricultural producers and decision-makers, contributing to improvements in crop yield and quality, and promoting food security and sustainable agricultural development.
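As a rough illustration of the dual-stream recurrent design described above (parallel LSTM and GRU branches, a refining GRU, and a fully connected prediction head), here is a toy PyTorch sketch. The class name, layer widths, and feature count are illustrative assumptions, not the published DualSpinNet configuration.

```python
import torch
import torch.nn as nn

class DualStreamYieldModel(nn.Module):
    """Toy dual-stream recurrent model: parallel LSTM and GRU branches,
    a refining GRU over their concatenated features, and a regression head."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.refine = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, n_features)
        h_lstm, _ = self.lstm(x)             # (batch, time, hidden)
        h_gru, _ = self.gru(x)
        fused = torch.cat([h_lstm, h_gru], dim=-1)
        refined, _ = self.refine(fused)
        return self.head(refined[:, -1])     # predict yield from last time step

model = DualStreamYieldModel(n_features=8)
pred = model(torch.randn(4, 12, 8))          # e.g. 12 monthly climate/soil records
```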
{"title":"DualSpinNet: A crop yield prediction model based on LSTM and GRU","authors":"Tao Zhang ,&nbsp;Qiang Yang ,&nbsp;Xu Tong ,&nbsp;Longhe Hu ,&nbsp;Jie Shao","doi":"10.1016/j.cogr.2025.12.001","DOIUrl":"10.1016/j.cogr.2025.12.001","url":null,"abstract":"<div><div>Accurate prediction of crop yield is of great significance for agricultural production and food security. In this paper, we introduce DualSpinNet, a novel model that integrates long short-term memory network (LSTM) and gated recurrent unit (GRU) architectures to address this challenge. This model employs a dual-stream approach to extract temporal features of climate and soil data, utilizing parallel LSTM and GRU layers. These features are subsequently refined through an additional GRU layer to enhance time-series dependencies. The dual recurrent structure allows for more precise extraction and processing of multi-level temporal features, thereby improving the accuracy of prediction. The final yield predictions are generated through a fully connected layer. We trained and validated the model using the Kaggle dataset, and compared its performance with other state-of-the-art models. Empirical results demonstrate that our proposed model achieves lower <span><math><mrow><mi>M</mi><mi>S</mi><mi>E</mi></mrow></math></span> and <span><math><mrow><mi>M</mi><mi>A</mi><mi>E</mi></mrow></math></span>, making it effective for crop yield prediction. The proposed method offers a highly accurate tool for agricultural producers and decision-makers, contributing to improvements in crop yield and quality, and promoting food security and sustainable agricultural development.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"6 ","pages":"Pages 32-43"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interactive multi-feature residual network for lightweight image super-resolution
Pub Date: 2026-01-01 DOI: 10.1016/j.cogr.2025.12.004
Jiaqi Tang, Sichen Guo, Heyou Chang, Guangwei Gao
Image Super-Resolution (SR) aims to recover high-resolution (HR) images from their low-resolution (LR) counterparts. However, existing SR methods suffer from insufficient multi-level feature interaction, leading to increased computational complexity. To address this limitation, we propose an Interactive Multi-Feature Residual Network (IMFRN) for lightweight SR. To facilitate feature exchange across different levels, we propose the Interactive Distillation Feature Refinement Module (IDFRM), which refines hierarchical features through cross-stage distillation and residual aggregation. IDFRM includes the Multi-Branch Feature Attention Block (MFAB) to integrate spatial and channel information from multiple branches adaptively. Additionally, the Dual Attention Fusion Module (DAFM) dynamically enhances feature representations using complementary attention mechanisms. To strengthen the global context, we integrate a Transformer-based module. Our IMFRN effectively facilitates interaction between features at different levels, achieving state-of-the-art performance with reduced parameters and computational cost.
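The adaptive channel weighting mentioned for MFAB and DAFM is in the spirit of squeeze-and-excitation channel attention. The sketch below shows only that generic mechanism, with hypothetical names and sizes; it is not the IMFRN implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a bottleneck MLP that re-weights feature channels."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, H, W)
        w = self.fc(self.pool(x))          # per-channel weights in [0, 1]
        return x * w

feat = torch.randn(1, 32, 48, 48)
out = ChannelAttention(32)(feat)           # same shape, channels re-weighted
```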
{"title":"Interactive multi-feature residual network for lightweight image super-resolution","authors":"Jiaqi Tang ,&nbsp;Sichen Guo ,&nbsp;Heyou Chang ,&nbsp;Guangwei Gao","doi":"10.1016/j.cogr.2025.12.004","DOIUrl":"10.1016/j.cogr.2025.12.004","url":null,"abstract":"<div><div>Image Super-Resolution (SR) aims to recover high-resolution (HR) images from their low-resolution (LR) counterparts. However, existing SR methods suffer from insufficient multi-level feature interaction, leading to increased computational complexity. To address this limitation, we propose an Interactive Multi-Feature Residual Network (IMFRN) for lightweight SR. To facilitate feature exchange across different levels, we propose the Interactive Distillation Feature Refinement Module (IDFRM), which refines hierarchical features through cross-stage distillation and residual aggregation. IDFRM includes the Multi-Branch Feature Attention Block (MFAB) to integrate spatial and channel information from multiple branches adaptively. Additionally, the Dual Attention Fusion Module (DAFM) dynamically enhances feature representations using complementary attention mechanisms. To strengthen the global context, we integrate a Transformer-based module. Our IMFRN effectively facilitates interaction between features at different levels, achieving state-of-the-art performance with reduced parameters and computational cost.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"6 ","pages":"Pages 65-77"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ASNet: Attention-guided structure-aware network for low-light image enhancement
Pub Date: 2026-01-01 DOI: 10.1016/j.cogr.2025.12.003
Muzi Wang, Jianping Wang, Zhibin Hao, Jiping Jiang, Zheng Liang, Wenyi Zhao, Weidong Zhang
Low-light image enhancement (LLIE) aims to improve the visibility and perceptual quality of images captured under poor lighting conditions by increasing brightness and restoring fine details. This letter introduces an LLIE method called ASNet, which combines attention guidance with structure-aware mechanisms to perform brightness adjustment. Specifically, we design a Pixel Adjustment Module (PAM), whose core is a Channel-Guided Attention Module (CGAM). It leverages the global contextual information of the input feature maps to guide the network's attention, enhance key semantic channels, and improve detail preservation in LLIE tasks. CGAM is embedded into the feature fusion stage, significantly enhancing the network's ability to perceive illumination distribution and respond to structural edges. Additionally, we design a structure-aware illumination adjustment loss to encourage the network to learn natural and structurally consistent illumination mappings. Extensive experiments on four benchmark datasets confirm that ASNet outperforms existing advanced methods in both visual and numerical assessments.
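To make the idea of a structure-aware illumination adjustment loss concrete, here is one plausible form: a pixel-wise L1 term plus an L1 penalty on finite-difference image gradients that encourages edge consistency. This is only a guess at the general shape of such a loss, not the loss defined in the paper, and the weighting factor is arbitrary.

```python
import torch
import torch.nn.functional as F

def structure_aware_loss(pred, target, alpha=0.5):
    """Toy structure-aware loss: pixel-wise L1 plus an L1 term on
    horizontal/vertical image gradients to preserve structural edges."""
    pixel = F.l1_loss(pred, target)
    # finite-difference gradients along width and height
    dx_p, dx_t = pred[..., :, 1:] - pred[..., :, :-1], target[..., :, 1:] - target[..., :, :-1]
    dy_p, dy_t = pred[..., 1:, :] - pred[..., :-1, :], target[..., 1:, :] - target[..., :-1, :]
    grad = F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)
    return pixel + alpha * grad

loss = structure_aware_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```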
{"title":"ASNet : Attention-guided structure-aware network for low-light image enhancement","authors":"Muzi Wang ,&nbsp;Jianping Wang ,&nbsp;Zhibin Hao ,&nbsp;Jiping Jiang ,&nbsp;Zheng Liang ,&nbsp;Wenyi Zhao ,&nbsp;Weidong Zhang","doi":"10.1016/j.cogr.2025.12.003","DOIUrl":"10.1016/j.cogr.2025.12.003","url":null,"abstract":"<div><div>Low-light image enhancement (LLIE) aims to optimize the visibility and perceptual quality of images described under poor lighting conditions by increasing brightness and restoring fine details. This letter introduces a LLIE method called ASNet, which combines attention guidance with structure-aware mechanisms to perform brightness adjustment. Specifically, we design a Pixel Adjustment Module (PAM), whose core is a Channel-Guided Attention Module (CGAM). It leverages the global contextual information of the input feature maps to guide the network’s attention, enhance key semantic channels, and improve detail preservation in LLIE tasks. CGAM is embedded into the feature fusion stage, significantly enhancing the network’s ability to perceive illumination distribution and respond to structural edges. Additionally, we design a structure-aware illumination adjustment loss to encourage the network to learn natural and structurally consistent illumination mappings. Extensive experiments on four benchmark datasets confirm that ASNet achieves superior performance compared to existing advanced methods in both visual and numerical assessments.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"6 ","pages":"Pages 55-64"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Underwater image super-resolution via multi-domain learning
Pub Date: 2025-12-05 DOI: 10.1016/j.cogr.2025.11.002
Guanze Shen, Jingxuan Zhang, Zhe Chen
Underwater images suffer from haze effects and low contrast due to wavelength- and distance-dependent scattering and attenuation. These issues present significant challenges for various underwater vision applications. Super resolution (SR) of underwater images offers an effective solution for enhancing both detail refinement and overall image visibility. However, underwater image SR remains challenging owing to the severe degradation of texture and color information. This paper proposes a multidomain learning-based SR network to enhance the performance of underwater image SR. Specifically, we introduce a multidomain encoder network that integrates grayscale and dual-color spaces into a unified framework. This architecture enables our model to simultaneously improve the underwater image quality through texture enhancement and color correction. By incorporating a channel attention mechanism, the most discriminative features extracted from multiple domains can be adaptively weighted and fused. Consequently, our network effectively boosts image resolution and enhances visual quality by leveraging multidomain data and the advantages of learning-based approaches. Experimental results demonstrate the superior performance of the proposed model in underwater image SR.
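The multidomain encoder combines grayscale and color representations of the same input. A toy sketch of that idea appears below, with an RGB branch and a BT.601 luminance branch whose features are concatenated; branch widths and names are assumptions, and the paper's actual dual-color-space design may differ.

```python
import torch
import torch.nn as nn

class MultiDomainEncoder(nn.Module):
    """Toy multi-domain encoder: one branch sees the RGB input, the other a
    grayscale (luminance) version; branch features are concatenated."""

    def __init__(self, channels=16):
        super().__init__()
        self.rgb_branch = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.gray_branch = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())

    def forward(self, rgb):                          # rgb: (B, 3, H, W) in [0, 1]
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
        gray = 0.299 * r + 0.587 * g + 0.114 * b     # ITU-R BT.601 luminance
        return torch.cat([self.rgb_branch(rgb), self.gray_branch(gray)], dim=1)

feats = MultiDomainEncoder()(torch.rand(1, 3, 32, 32))   # shape (1, 32, 32, 32)
```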
{"title":"Underwater image super-resolution via multi-domain learning","authors":"Guanze Shen ,&nbsp;Jingxuan Zhang ,&nbsp;Zhe Chen","doi":"10.1016/j.cogr.2025.11.002","DOIUrl":"10.1016/j.cogr.2025.11.002","url":null,"abstract":"<div><div>Underwater images suffer from haze effects and low contrast due to wavelength- and distance-dependent scattering and attenuation. These issues present significant challenges for various underwater vision applications. Super resolution (SR) of underwater images offers an effective solution for enhancing both detail refinement and overall image visibility. However, underwater image SR remains challenging owing to the severe degradation of texture and color information. This paper proposes a multidomain learning-based SR network to enhance the performance of underwater image SR. Specifically, we introduce a multidomain encoder network that integrates grayscale and dual-color spaces into a unified framework. This architecture enables our model to simultaneously improve the underwater image quality through texture enhancement and color correction. By incorporating a channel attention mechanism, the most discriminative features extracted from multiple domains can be adaptively weighted and fused. Consequently, our network effectively boosts image resolution and enhances visual quality by leveraging multidomain data and the advantages of learning-based approaches. Experimental results demonstrate the superior performance of the proposed model in underwater image SR.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"6 ","pages":"Pages 20-31"},"PeriodicalIF":0.0,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145737051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Self-adaptive control of a two-point contact gripper for the precise handling of compliant objects in industrial robotics
Pub Date: 2025-11-22 DOI: 10.1016/j.cogr.2025.11.001
Sarawit Cheewaratchanon, Jutamanee Auysakul, Paramin Neranon, Arisara Romyen
This paper presents a novel adaptive control framework for robotic grippers that handles a wide range of compliant objects by mimicking human grasping behaviour. The proposed system integrates three distinct control strategies: classical Proportional-Integral-Derivative (PID), Proportional-Integral-based Fuzzy Logic Control (PI-FLC), and Reinforcement Learning (RL) to achieve precise and safe force modulation during object manipulation. A two-finger gripper prototype was developed and experimentally validated using objects of varying stiffness levels, including rigid (iron, plastic) and deformable materials (silicone, foam, sponge). Real-time force control was benchmarked against human-defined reference profiles derived from tactile interaction experiments. The results demonstrate that while PID control provides satisfactory performance for rigid objects, it fails to adapt to nonlinear dynamics in soft materials. In contrast, the PI-Fuzzy and RL controllers can achieve superior force tracking, stability, and generalisation, closely aligning with human-like grasping patterns. The PI-Fuzzy controller excels in rule-based adaptability, while RL shows potential in learning optimal strategies across different compliance levels. This study underscores the significance of integrating classical and intelligent control strategies to improve robotic dexterity, safety, and autonomy, particularly in unstructured environments. The findings have meaningful implications for industrial automation, human-robot collaboration, and the effective manipulation of objects with varying stiffness.
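Of the three controllers compared, the PID baseline is the simplest to write down. The following discrete PID force-tracking loop is a generic textbook sketch with made-up gains and setpoints, not the tuned controller used on the gripper prototype.

```python
class PIDForceController:
    """Discrete PID controller tracking a reference grip force."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, measured):
        error = reference - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # output drives the gripper actuator (e.g. a motor current command)
        return self.kp * error + self.ki * self.integral + self.kd * derivative

ctrl = PIDForceController(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
command = ctrl.update(reference=1.5, measured=1.2)   # forces in newtons, illustrative
```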
{"title":"Self-adaptive control of a two-point contact gripper for the precise handling of compliant objects in industrial robotics","authors":"Sarawit Cheewaratchanon ,&nbsp;Jutamanee Auysakul ,&nbsp;Paramin Neranon ,&nbsp;Arisara Romyen","doi":"10.1016/j.cogr.2025.11.001","DOIUrl":"10.1016/j.cogr.2025.11.001","url":null,"abstract":"<div><div>This paper presents a novel adaptive control framework for robotic grippers that handles a wide range of compliant objects by mimicking human grasping behaviour. The proposed system integrates three distinct control strategies: classical Proportional-Integral-Derivative (PID), Proportional-Integral-based Fuzzy Logic Control (PI-FLC), and Reinforcement Learning (RL) to achieve precise and safe force modulation during object manipulation. A two-finger gripper prototype was developed and experimentally validated using objects of varying stiffness levels, including rigid (iron, plastic) and deformable materials (silicone, foam, sponge). Real-time force control was benchmarked against human-defined reference profiles derived from tactile interaction experiments. The results demonstrate that while PID control provides satisfactory performance for rigid objects, it fails to adapt to nonlinear dynamics in soft materials. In contrast, the PI-Fuzzy and RL controllers can achieve superior force tracking, stability, and generalisation, closely aligning with human-like grasping patterns. The PI-Fuzzy controller excels in rule-based adaptability, while RL shows potential in learning optimal strategies across different compliance levels. This study underscores the significance of integrating classical and intelligent control strategies to improve robotic dexterity, safety, and autonomy, particularly in unstructured environments. The findings have meaningful implications for industrial automation, human-robot collaboration, and the effective manipulation of objects with varying stiffness.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"6 ","pages":"Pages 1-19"},"PeriodicalIF":0.0,"publicationDate":"2025-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145600536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Navigation control of unmanned aerial vehicles in dynamic collaborative indoor environment using probability fuzzy logic approach
Pub Date: 2025-01-01 DOI: 10.1016/j.cogr.2025.02.002
Sameer Agrawal, Bhumeshwar K. Patle, Sudarshan Sanap
The development of drones for various applications makes it essential to address the critical issue of providing collision-free and optimal navigation in uncertain environments. The present work develops, simulates, and experimentally tests a Probability Fuzzy Logic (PFL) controller for route planning and obstacle avoidance of drones in uncertain static and dynamic environments. The PFL system uses probability-based impact assessment and fuzzy logic rules to deal with unknowns and environmental changes. The fuzzy logic system takes as input the distances of objects from the drone's front, left, and right sides, as well as the probability of collision based on the drone's speed and its proximity to the obstacles. A set of thirty fuzzy rules based on the obstacle distances from the front, left, and right is defined to decide the outputs, i.e. the speed of the drone and the heading angle. The simulation environment is developed using MATLAB, with grid-based motion planning that accounts for static and dynamic obstacles. The system's performance is validated through simulations and real-world experiments, comparing path length and travel time. Comparison of the simulation and experimental results shows that the proposed PFL-based controller is efficient, accurate, and robust in both static and dynamic environments, from simple to complex. As the simulation and experimental results show, the drones can plan the shortest collision-free path across all scenarios. However, due to communication delay, inaccuracy of sensor response, environmental effects, and motor delay, there are slight deviations between the simulated and experimental values. Error analysis shows that the deviation between the simulated and experimental values is within 6.66 % in all studied scenarios.
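To illustrate how distance memberships and fuzzy rules turn into a speed command, here is a deliberately tiny sketch with two rules and singleton consequents; the paper defines thirty rules over front/left/right distances and also outputs a heading angle. All membership breakpoints and consequent speeds below are invented for illustration.

```python
def near(d):
    """Membership of 'near' obstacle: 1 at 0 m, falling to 0 beyond 2 m."""
    return max(0.0, 1.0 - d / 2.0)

def far(d):
    """Membership of 'far' obstacle: 0 below 1 m, rising to 1 beyond 5 m."""
    return min(1.0, max(0.0, (d - 1.0) / 4.0))

def drone_speed(front):
    """Two illustrative Mamdani-style rules (the paper uses thirty):
    IF front is near THEN speed is slow; IF front is far THEN speed is fast.
    Heading-angle rules would combine left/right memberships with min() as fuzzy AND.
    Defuzzification: weighted average of singleton consequents."""
    slow, fast = 0.2, 1.0                      # m/s, illustrative consequents
    w_slow, w_fast = near(front), far(front)
    total = w_slow + w_fast
    return (w_slow * slow + w_fast * fast) / total if total else 0.0

print(drone_speed(front=1.2))                  # slower command when an obstacle is close
```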
{"title":"Navigation control of unmanned aerial vehicles in dynamic collaborative indoor environment using probability fuzzy logic approach","authors":"Sameer Agrawal ,&nbsp;Bhumeshwar K. Patle ,&nbsp;Sudarshan Sanap","doi":"10.1016/j.cogr.2025.02.002","DOIUrl":"10.1016/j.cogr.2025.02.002","url":null,"abstract":"<div><div>The development of drones in various applications makes it essential to address the critical issue of providing collision-free and optimal navigation in uncertain environments. The current research work aims to develop, simulate and experiment with the Probability Fuzzy Logic (PFL) controller for route planning and obstacle avoidance for drones in uncertain static and dynamic environments. The PFL system uses probability-based impact assessment and fuzzy logic rules to deal with unknowns and environmental changes. The fuzzy logic system takes in input about the distance of objects from the drone's front, left, and right sides, as well as the probability of collision based on the drone's speed and how close it is to the obstacles. The set of thirty fuzzy rules based on the distance of the obstacle from front left and right are defined to decide the output, i.e. speed of the drone and heading angle. The simulation environment is developed using MATLAB, with grid-based motion planning that accounts for static and dynamic obstacles. The system's performance is validated through simulations and real-world experiments, comparing path length and travel time. On comparing the simulation and experimental results, the proposed PFL-based controller has been proven to be efficient, accurate, and robust for both static and dynamic and simple to complex environments. The drones can plan the shortest and most collision-free path across all the scenarios, as depicted in the simulation and experimentation results. However, due to communication delay, inaccuracy of sensor response, environmental impact and motor delay, there are slight deviations between the simulation and experimentation values. Upon performing the error analysis, it is found that the error between the simulation and experimental value is within the range of 6.66 % in all the studied scenarios.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 86-113"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143579569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TrOCR-driven seal instrument detection and recognition for cognitive robotic applications
Pub Date: 2025-01-01 DOI: 10.1016/j.cogr.2025.10.001
Xuan Jin, Sheng Wang, Miaomiao Zhang, Guoteng Xu, Bingqi Hu, Hanlin Tang
Seal recognition, as a fundamental perception capability, is crucial for enabling cognitive robotic systems to autonomously interact with and understand physical documents in intelligent office and archival environments. While Transformer-based optical character recognition (OCR) methods have recently achieved remarkable progress, the recognition of curved and degraded seal text remains a significant challenge. Traditional approaches often rely on cumbersome pipelines with limited robustness, which hampers their integration into robotic cognitive platforms. To address these issues, this paper proposes a novel perception framework that integrates a YOLO-based detection module with the TrOCR recognition model for seal content analysis. The framework enhances robotic perception through three core mechanisms: precise spatial localization, adaptive noise suppression, and efficient curved-text recognition. Experimental results demonstrate that the proposed approach achieves 94.8% accuracy on bent seal text recognition tasks, validating its effectiveness in complex, real-world scenarios. These findings highlight the potential of the method to serve as a reliable perception module within cognitive robotic systems for document understanding and autonomous decision-making.
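For readers unfamiliar with TrOCR, the snippet below shows the standard Hugging Face transformers inference path on an already-cropped seal region, using the public microsoft/trocr-base-printed checkpoint. The paper's pipeline additionally performs YOLO-based seal localization and fine-tunes its own recognition model, neither of which is shown here; the image path is a hypothetical placeholder.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Public pre-trained printed-text TrOCR checkpoint (not the paper's fine-tuned model).
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

def recognize_crop(crop: Image.Image) -> str:
    """Run TrOCR on a cropped seal region (assumed already localized by a detector)."""
    pixel_values = processor(images=crop, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

text = recognize_crop(Image.open("seal_crop.png").convert("RGB"))  # hypothetical crop
```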
{"title":"TrOCR-driven seal instrument detection and recognition for cognitive robotic applications","authors":"Xuan Jin ,&nbsp;Sheng Wang ,&nbsp;Miaomiao Zhang ,&nbsp;Guoteng Xu ,&nbsp;Bingqi Hu ,&nbsp;Hanlin Tang","doi":"10.1016/j.cogr.2025.10.001","DOIUrl":"10.1016/j.cogr.2025.10.001","url":null,"abstract":"<div><div>Seal recognition, as a fundamental perception capability, is crucial for enabling cognitive robotic systems to autonomously interact with and understand physical documents in intelligent office and archival environments. While Transformer based optical character recognition (OCR) methods have recently achieved remarkable progress, the recognition of curved and degraded seal text remains a significant challenge. Traditional approaches often rely on cumbersome pipelines with limited robustness, which hampers their integration into robotic cognitive platforms. To address these issues, this paper proposes a novel perception framework that integrates the YOLO-based detection module with the TrOCR recognition model for seal content analysis. The framework enhances robotic perception through three core mechanisms: precise spatial localization, adaptive noise suppression, and efficient curved-text recognition. Experimental results demonstrate that the proposed approach achieves 94.8% accuracy in bent seal text recognition tasks, validating its effectiveness in complex, real-world scenarios. These findings highlight the potential of the method to serve as a reliable perception module within cognitive robotic systems for document understanding and autonomous decision-making.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 286-298"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145361999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Zero-shot intelligent fault diagnosis via semantic fusion embedding
Pub Date: 2025-01-01 DOI: 10.1016/j.cogr.2024.12.001
Honghua Xu, Zijian Hu, Ziqiang Xu, Qilong Qian
Most fault diagnosis studies rely on man-made data collected in the laboratory, where operating conditions are controlled and stable. Such models hardly adapt to practical conditions, since laboratory data can hardly capture fault patterns across domains. To solve this problem, this paper proposes a novel deep fault semantic fusion embedding model (DFSFEM) to realize zero-shot intelligent fault diagnosis. The novelties of DFSFEM lie in two aspects. On the one hand, a novel semantic fusion embedding module is proposed to enhance the representability and adaptability of feature learning across domains. On the other hand, a neural network-based metric module is designed to replace traditional distance measurements, enhancing the transfer capability between domains. These novelties jointly help DFSFEM provide faithful diagnosis of unseen fault types. Experiments on bearing datasets are conducted to evaluate zero-shot intelligent fault diagnosis performance. Extensive experimental results and comprehensive analysis demonstrate the superiority of the proposed DFSFEM in terms of diagnosis correctness and adaptability.
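The neural network-based metric module replaces a fixed distance with a learned compatibility score between a signal embedding and a fault-class semantic embedding. The toy sketch below shows that pattern with arbitrary dimensions and randomly generated embeddings; it is not the DFSFEM architecture.

```python
import torch
import torch.nn as nn

class MetricNet(nn.Module):
    """Learned similarity: scores how well a signal embedding matches a
    fault-class semantic embedding, replacing a fixed distance metric."""

    def __init__(self, dim=32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, signal_emb, class_emb):
        return self.scorer(torch.cat([signal_emb, class_emb], dim=-1))

metric = MetricNet()
signal_emb = torch.randn(1, 32)              # embedded vibration signal (illustrative)
class_embs = torch.randn(5, 32)              # semantic embeddings of 5 unseen fault types
scores = metric(signal_emb.expand(5, -1), class_embs)
predicted = scores.argmax().item()           # zero-shot class prediction
```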
{"title":"Zero-shot intelligent fault diagnosis via semantic fusion embedding","authors":"Honghua Xu,&nbsp;Zijian Hu,&nbsp;Ziqiang Xu,&nbsp;Qilong Qian","doi":"10.1016/j.cogr.2024.12.001","DOIUrl":"10.1016/j.cogr.2024.12.001","url":null,"abstract":"<div><div>Most fault diagnosis studies rely on the man-made data collected in laboratory where the operation conditions are under control and stable. However, they can hardly adapt to the practical conditions since the man-made data can hardly model the fault patterns across domains. Aiming to solve this problem, this paper proposes a novel deep fault semantic fusion embedding model (DFSFEM) to realize zero-shot intelligent fault diagnosis. The novelties of DFSFEM lie in two aspects. On the one hand, a novel semantic fusion embedding module is proposed to enhance the representability and adaptability of the feature learning across domains. On the other hand, a neural network-based metric module is designed to replace traditional distance measurements, enhancing the transferring capability between domains. These novelties jointly help DFSFEM provide prominent faithful diagnosis on unseen fault types. Experiments on bearing datasets are conducted to evaluate the zero-shot intelligent fault diagnosis performance. Extensive experimental results and comprehensive analysis demonstrate the superiority of the proposed DFSFEM in terms of diagnosis correctness and adaptability.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 37-47"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DECTNet: A detail enhanced CNN-Transformer network for single-image deraining
Pub Date: 2025-01-01 DOI: 10.1016/j.cogr.2024.12.002
Liping Wang, Guangwei Gao
Recently, Convolutional Neural Networks (CNNs) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design enables a more effective step-by-step extraction of detailed information, allowing the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to further enhance detail preservation. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments demonstrate that DECTNet outperforms several state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.
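The DASTB description, spatial attention refining multi-head self-attention output, can be pictured with the following toy block: spatial positions become tokens, pass through nn.MultiheadAttention, and are then gated by a single-channel spatial map. Channel counts, kernel size, and the class name are our assumptions, not the published DECTNet design.

```python
import torch
import torch.nn as nn

class SpatialGatedSelfAttention(nn.Module):
    """Toy block: multi-head self-attention over flattened spatial tokens,
    followed by a 1-channel spatial attention map that re-weights the result."""

    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C)
        attn, _ = self.mhsa(tokens, tokens, tokens)
        feat = attn.transpose(1, 2).reshape(b, c, h, w)
        return feat * self.spatial_gate(feat)    # spatial attention refines MHSA output

out = SpatialGatedSelfAttention()(torch.randn(1, 32, 16, 16))
```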
{"title":"DECTNet: A detail enhanced CNN-Transformer network for single-image deraining","authors":"Liping Wang ,&nbsp;Guangwei Gao","doi":"10.1016/j.cogr.2024.12.002","DOIUrl":"10.1016/j.cogr.2024.12.002","url":null,"abstract":"<div><div>Recently, Convolutional Neural Networks (CNN) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design facilitates a more effective step-by-step extraction of detailed information, enabling the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to enhance detail preservation further. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments have demonstrated that DECTNet outperforms some state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 48-60"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143143957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0