
Latest Publications from Complex & Intelligent Systems

OMSF2: optimizing multi-scale feature fusion learning for pneumoconiosis staging diagnosis through data specificity augmentation
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1007/s40747-024-01729-0
Xueting Ren, Surong Chu, Guohua Ji, Zijuan Zhao, Juanjuan Zhao, Yan Qiang, Yangyang Wei, Yan Wang

Diagnosing pneumoconiosis is challenging because the lesions are not easily visible on chest X-rays, and the images often lack clear details. Existing deep detection models utilize Feature Pyramid Networks (FPNs) to identify objects at different scales. However, they struggle with insufficient perception of small targets and gradient inconsistency in medical image detection tasks, hindering the full utilization of multi-scale features. To address these issues, we propose an Optimized Multi-Scale Feature Fusion learning framework, OMSF2, which includes the following components: (1) a data specificity augmentation module captures intrinsic data representations and introduces diversity by learning morphological variations and lesion locations; (2) a multi-scale feature learning module refines heatmap-guided micro-feature localization, enabling full extraction of multi-directional features of subtle diffuse targets; (3) a multi-scale feature fusion module fuses high-level and low-level features to better capture subtle differences between disease stages. Notably, this paper proposes a method for fine-grained learning of low-resolution micro-features in pneumoconiosis, addressing the issue of maintaining cross-layer gradient consistency under multi-scale feature fusion. We established an enhanced pneumoconiosis X-ray dataset to optimize the lesion detection capability of the OMSF2 model, and introduced an external dataset to evaluate other chest X-rays with complex lesions. On the AP-50 and R-50 evaluation metrics, OMSF2 improved by 3.25% and 3.31% on the internal dataset, and by 2.28% and 0.24% on the external dataset, respectively. Experimental results show that OMSF2 achieves significantly better performance than state-of-the-art baselines in medical image detection tasks.
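
Although the OMSF2 implementation is not shown in this listing, the core FPN-style operation it optimizes, fusing an upsampled high-level feature map into a low-level one, can be sketched in a few lines of PyTorch. The module name and channel sizes below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """FPN-style fusion: a coarse high-level map is upsampled and
    added to a finer low-level map, then smoothed with a 3x3 conv."""
    def __init__(self, ch_low, ch_high, ch_out):
        super().__init__()
        self.lateral = nn.Conv2d(ch_low, ch_out, kernel_size=1)
        self.reduce = nn.Conv2d(ch_high, ch_out, kernel_size=1)
        self.smooth = nn.Conv2d(ch_out, ch_out, kernel_size=3, padding=1)

    def forward(self, feat_low, feat_high):
        high = F.interpolate(self.reduce(feat_high),
                             size=feat_low.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(feat_low) + high)

low, high = torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32)
print(TopDownFusion(256, 512, 256)(low, high).shape)  # torch.Size([1, 256, 64, 64])
```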

Citations: 0
FSPPCFs: a privacy-preserving collaborative filtering recommendation scheme based on fuzzy C-means and Shapley value
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1007/s40747-024-01758-9
Weiwei Wang, Wenping Ma, Kun Yan

Collaborative filtering recommendation systems generate personalized recommendations by analyzing and collaboratively processing large volumes of user rating and behavior data. The widespread use of recommendation systems in daily decision-making also brings potential risks of privacy leakage. Recent literature predominantly employs differential privacy to achieve privacy protection; however, many schemes struggle to balance user privacy and recommendation performance effectively. In this work, we present FSPPCFs, a practical privacy-preserving scheme for user-based collaborative filtering recommendation that utilizes fuzzy C-means clustering and the Shapley value, aiming to enhance recommendation performance while ensuring privacy protection. Specifically, (i) we modify the traditional recommendation scheme by integrating a similarity balance factor into the Pearson similarity algorithm, enhancing recommendation system performance; (ii) FSPPCFs first clusters the dataset through fuzzy C-means clustering and the Shapley value, grouping users with similar interests and attributes into the same cluster, thereby providing more accurate data support for recommendations; differential privacy is then used to protect each user's personal privacy when selecting the neighbor set from the target cluster. Finally, it is theoretically proven that our scheme satisfies differential privacy. Experimental results illustrate that our scheme significantly outperforms existing methods.
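
Two of the scheme's building blocks are standard enough to sketch. The NumPy code below implements plain fuzzy C-means membership updates and a Laplace mechanism of the kind commonly used for differential privacy; the Shapley-value weighting and the full FSPPCFs pipeline are omitted, and all names and parameters are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: returns memberships U (n, c) and centers (c, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))          # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

def laplace_mechanism(values, sensitivity, epsilon, seed=0):
    """Add Laplace(sensitivity/epsilon) noise -- the classic DP primitive."""
    rng = np.random.default_rng(seed)
    return values + rng.laplace(0.0, sensitivity / epsilon, size=values.shape)

X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X)
noisy_scores = laplace_mechanism(U[:, 0], sensitivity=1.0, epsilon=0.5)
```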

Citations: 0
Calibration between a panoramic LiDAR and a limited field-of-view depth camera
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1007/s40747-024-01710-x
Weijie Tang, Bin Wang, Longxiang Huang, Xu Yang, Qian Zhang, Sulei Zhu, Yan Ma

Depth cameras and LiDARs are commonly used sensing devices widely applied in fields such as autonomous driving, navigation, and robotics. Precise calibration between the two is crucial for accurate environmental perception and localization. Methods that utilize the point cloud features of both sensors to estimate extrinsic parameters can also be extended to calibrate limited field-of-view (FOV) LiDARs against panoramic LiDARs, which holds significant research value. However, calibrating point clouds from two sensors with different fields of view and densities is challenging. This paper proposes methods for automatic calibration of the two sensors by extracting and registering features in three scenarios: environments with one plane, two planes, and three planes. For the one-plane and two-plane scenarios, we propose constructing feature histogram descriptors based on plane constraints for the remaining points, in addition to planar features, for registration. Experimental results on simulation and real-world data demonstrate that the proposed methods achieve precise calibration in all three scenarios, maintaining average rotation and translation calibration errors within 2 degrees and 0.05 m, respectively, for a 360° linear LiDAR and a depth camera with a field of view of 100° vertically and 70° horizontally.
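
For intuition about the three-plane scenario: once matching planes are extracted from both point clouds, the rotation between the sensors can be recovered by aligning their unit normals with a Kabsch-style SVD solve. The sketch below shows that standard construction under the assumption of known plane correspondences; the translation, which follows from the plane-offset constraints, is omitted.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: (unit normal, centroid) via SVD."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid                     # normal = least-variance direction

def rotation_from_normals(src, dst):
    """Kabsch: rotation R minimizing sum ||R @ src_i - dst_i||^2 over unit normals."""
    U, _, Vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Synthetic check: three orthogonal planes seen by both sensors.
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
normals_cam = np.eye(3)
normals_lidar = normals_cam @ R_true.T          # each normal rotated by R_true
print(np.allclose(rotation_from_normals(normals_cam, normals_lidar), R_true))  # True
```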

Citations: 0
Semantic-enhanced panoptic scene graph generation through hybrid and axial attentions
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1007/s40747-024-01746-z
Xinhe Kuang, Yuxin Che, Huiyan Han, Yimin Liu

The generation of panoptic scene graphs represents a cutting-edge challenge in image scene understanding, necessitating sophisticated predictions of both intra-object relationships and interactions between objects and their backgrounds. This complexity tests the limits of current predictive models' ability to discern nuanced relationships within images. Conventional approaches often fail to effectively combine visual and semantic data, leading to semantically impoverished predictions. To address these issues, we propose a novel method of semantic-enhanced panoptic scene graph generation through hybrid and axial attentions (PSGAtten). Specifically, a series of hybrid attention networks are stacked within both the object context encoding and relationship context encoding modules, enhancing the refinement and fusion of visual and semantic information. Within the hybrid attention networks, self-attention mechanisms facilitate feature refinement within modalities, while cross-attention mechanisms promote feature fusion across modalities. An axial attention model is further applied to enhance the integration of global information. Experimental validation on the PSG dataset confirms that our approach not only surpasses existing methods in generating detailed panoptic scene graphs but also significantly improves recall rates, thereby enhancing relationship prediction in scene graph generation.
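
Axial attention, one of the named ingredients, factorizes 2D self-attention into a pass along the height axis followed by one along the width axis, cutting the cost from O((HW)^2) to O(HW(H+W)). The PyTorch sketch below is a generic version of that mechanism, not PSGAtten's actual module.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention along rows (width axis), then along columns (height axis)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                  # x: (B, H, W, C)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)                      # attend across W
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)  # attend across H
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

x = torch.randn(2, 16, 16, 64)
print(AxialAttention(64)(x).shape)                         # torch.Size([2, 16, 16, 64])
```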

Citations: 0
DCTnet: a double-channel transformer network for peach disease detection using UAVs
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1007/s40747-024-01749-w
Jie Zhang, Dailin Li, Xiaoping Shi, Fengxian Wang, Linwei Li, Yibin Chen

The use of unmanned aerial vehicle (UAV) technology to inspect extensive peach orchards to improve fruit yield and quality is currently a major area of research. The challenge is to accurately detect peach diseases in real time, which is critical to improving peach production. The dense arrangement of peaches and the uneven lighting conditions significantly hamper the accuracy of disease detection. To overcome this, this paper presents a dual-channel transformer network (DCTNet) for peach disease detection. First, an Adaptive Dual-Channel Affine Transformer (ADCT) is developed to efficiently capture key information in images of diseased peaches by integrating features across spatial and channel dimensions within blocks. Next, a Robust Gated Feed Forward Network (RGFN) is constructed to extend the receptive field of the model by improving its context aggregation capabilities. Finally, a Local–Global Network is proposed to fully capture the multi-scale features of peach disease images through a collaborative training approach with input images. Furthermore, a peach disease dataset including different growth stages of peaches is constructed to evaluate the detection performance of the proposed method. Extensive experimental results show that our model outperforms other sophisticated models, achieving an AP-50 of 95.57% and an F1 score of 0.91. The integration of this method into UAV systems for surveying large peach orchards ensures accurate disease detection, thereby safeguarding peach production.
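
The listing does not spell out the RGFN, but gated feed-forward blocks of the kind it names typically split an expanded projection in two and let one half gate the other, widening the effective receptive field cheaply when paired with a depthwise convolution. The sketch below is such a generic gated FFN, not the paper's actual RGFN; the expansion factor and layer names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFeedForward(nn.Module):
    """Generic gated FFN: project to 2*hidden, depthwise-conv, gate, project back."""
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.proj_in = nn.Conv2d(dim, hidden * 2, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                                padding=1, groups=hidden * 2)
        self.proj_out = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        x1, x2 = self.dwconv(self.proj_in(x)).chunk(2, dim=1)
        return self.proj_out(F.gelu(x1) * x2)   # one half gates the other

x = torch.randn(1, 32, 16, 16)
print(GatedFeedForward(32)(x).shape)            # torch.Size([1, 32, 16, 16])
```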

Citations: 0
A disturbance suppression second-order penalty-like neurodynamic approach to distributed optimal allocation
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1007/s40747-024-01732-5
Wenwen Jia, Wenbin Zhao, Sitian Qin

This paper proposes an efficient penalty-like neurodynamic approach, modeled as a second-order multi-agent system under external disturbances, to investigate distributed optimal allocation problems. Sliding mode control technology is integrated into the neurodynamic approach to suppress the influence of unknown external disturbances on the system's stability within a fixed time. Then, based on a finite-time tracking technique, resource allocation constraints are handled using a penalty parameter approach, and their global information is processed in a distributed manner via a multi-agent system. Compared with existing neurodynamic approaches developed from projection theory, the proposed approach utilizes the penalty method and tracking technique to avoid introducing projection operators. Additionally, the convergence of the proposed neurodynamic approach is proven, and an optimal solution to the distributed optimal allocation problem is obtained. Finally, the main results are validated through a numerical simulation involving a power dispatch problem.
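
A toy, centralized version of the penalty idea helps fix intuition: replace the allocation constraint sum_i x_i = D with a quadratic penalty and run Euler-discretized gradient dynamics. The sketch below ignores the paper's distributed consensus, second-order dynamics, and sliding-mode disturbance rejection; the costs, demand, and step sizes are made up.

```python
import numpy as np

# Toy allocation: minimize sum_i (a_i x_i^2 + b_i x_i) subject to sum_i x_i = D.
a = np.array([1.0, 0.5, 1.5])        # quadratic cost coefficients (illustrative)
b = np.array([2.0, 1.0, 3.0])
D, rho, dt = 10.0, 50.0, 1e-3        # demand, penalty weight, Euler step

x = np.zeros(3)
for _ in range(20_000):
    # Gradient of sum_i f_i(x_i) + (rho/2) * (sum_i x_i - D)^2
    x -= dt * (2 * a * x + b + rho * (x.sum() - D))

print(x, x.sum())   # allocations; sum approaches D up to the O(1/rho) penalty bias
```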

Citations: 0
Dual medical image watermarking using SRU-enhanced network and EICC chaotic map
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-28 | DOI: 10.1007/s40747-024-01723-6
Fei Yan, Zeqian Wang, Kaoru Hirota

With the rapid advancement of next-generation information technology, smart healthcare has seamlessly integrated into various facets of people's daily routines. Accordingly, enhancing the integrity and security of medical images has gained significant prominence as a crucial research direction. In this study, a dual watermarking scheme based on the SRU-ConvNeXt V2 (SCNeXt) model and the exponential iterative-cubic-cosine (EICC) chaotic map is proposed for medical image integrity verification, tamper localization, and copyright protection. A logo image for integrity verification is embedded into the region of interest within the medical image, and a text image containing copyright information is combined with the feature vectors extracted by SCNeXt to generate zero-watermark information. The security of the watermarks is strengthened through a pre-embedding encryption algorithm using the chaotic sequence produced by the EICC map. A comprehensive set of experiments was conducted to validate the proposed dual watermarking scheme. The results demonstrate that the scheme offers significant advantages in both imperceptibility and robustness over traditional methods, including those that rely on manual extraction of medical image features. The scheme achieves excellent imperceptibility, with an average PSNR of 52.29 dB and an average SSIM of 0.9962. Moreover, it displays strong resilience against various attacks, particularly high-strength common and geometric attacks, maintaining an NC value above 0.84, which confirms its robustness. These findings highlight the superiority of the proposed dual watermarking scheme, establishing its potential as an advanced solution for secure and reliable medical image management.
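
The EICC map itself is not defined in this listing, so the sketch below substitutes the classic logistic map to show the general pre-embedding encryption pattern: a chaotic keystream is XORed with the watermark bytes, and applying the same keystream again decrypts. The map choice, seed, and parameter are illustrative stand-ins only.

```python
import numpy as np

def chaotic_keystream(n, x0=0.37, r=3.99):
    """Logistic-map keystream (a stand-in for the paper's EICC map)."""
    ks, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)        # chaotic iteration in (0, 1)
        ks[i] = x
    return (ks * 256).astype(np.uint8)

def xor_crypt(data, x0=0.37):
    """XOR with the keystream; running it twice restores the input."""
    return np.bitwise_xor(data, chaotic_keystream(len(data), x0))

wm = np.frombuffer(b"copyright 2024", dtype=np.uint8)
enc = xor_crypt(wm)
print(xor_crypt(enc).tobytes())      # b'copyright 2024'
```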

Citations: 0
Rethinking spatial-temporal contrastive learning for Urban traffic flow forecasting: multi-level augmentation framework
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-28 | DOI: 10.1007/s40747-024-01754-z
Lin Pan, Qianqian Ren, Zilong Li, Xingfeng Lv

Graph neural networks integrating contrastive learning have attracted growing attention in urban traffic flow forecasting. However, most existing graph contrastive learning methods do not perform well in capturing local–global spatial dependencies or in designing contrastive learning schemes for both the spatial and temporal dimensions. We argue that these methods cannot fully extract spatial-temporal features and are easily affected by data noise. In light of these challenges, this paper proposes an innovative Urban Spatial-Temporal Graph Contrastive Learning framework (UrbanGCL) to improve the accuracy of urban traffic flow forecasting. Specifically, UrbanGCL employs multi-level data augmentation to address data noise and incompleteness and to learn both local and global topology features. The augmented traffic feature matrices and adjacency matrices are then fed into a simple yet effective dual-branch network with shared parameters to capture spatial-temporal correlations within traffic sequences. Moreover, we introduce spatial and temporal contrastive learning auxiliary tasks to alleviate the sparsity of the supervision signal and extract the most critical spatial-temporal information. Extensive experimental results on four real-world urban datasets demonstrate that UrbanGCL significantly outperforms other state-of-the-art methods, with a maximum improvement of nearly 8.80%.
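
Contrastive auxiliary tasks of this kind are usually trained with an InfoNCE-style loss between two augmented views, where matching rows are positives and every other row in the batch is a negative. The sketch below is that generic loss, not UrbanGCL's exact objective; the temperature is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """InfoNCE between two views: row i of z1 should match row i of z2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)   # embeddings of two augmentations
print(info_nce(z1, z2).item())
```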

Citations: 0
Sample-prototype optimal transport-based universal domain adaptation for remote sensing image classification
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-28 | DOI: 10.1007/s40747-024-01747-y
Xiaosong Chen, Yongbo Yang, Dong Liu, Shengsheng Wang

In recent years, there has been growing interest in domain adaptation for remote sensing image scene classification, particularly in universal domain adaptation, where both the source and target domains possess their own private categories. Existing methods often lack precision on remote sensing image datasets due to insufficient prior knowledge shared between the source and target domains. This study aims to effectively distinguish between common and private classes despite large intra-class and small inter-class sample discrepancies in remote sensing images. To address these challenges, we propose Sample-Prototype Optimal Transport-Based Universal Domain Adaptation (SPOT). The proposed approach comprises two key components. First, we utilize an unbalanced optimal transport algorithm along with a sample complement mechanism to identify common and private classes based on the optimal transport assignment matrix. Second, we leverage the optimal transport algorithm to enhance discriminability among different classes while promoting similarity within the same class. Experimental results demonstrate that SPOT significantly enhances classification accuracy and robustness in universal domain adaptation for remote sensing images, underscoring its efficacy in addressing the identified challenges.
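
For intuition, the assignment matrix at the heart of the method is an optimal transport plan. The sketch below computes the simpler balanced, entropy-regularized plan with Sinkhorn iterations (the paper uses an unbalanced variant); the cost matrix and marginals are illustrative.

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.1, n_iter=200):
    """Entropy-regularized OT plan between histograms a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # alternate scaling of rows and columns
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

cost = np.random.default_rng(0).random((5, 4))   # e.g. sample-to-prototype distances
P = sinkhorn(cost, np.full(5, 0.2), np.full(4, 0.25))
print(P.sum(axis=1), P.sum(axis=0))              # marginals approach a and b
```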

Citations: 0
Rain removal method for single image of dual-branch joint network based on sparse transformer
IF 5.8 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-28 | DOI: 10.1007/s40747-024-01711-w
Fangfang Qin, Zongpu Jia, Xiaoyan Pang, Shan Zhao

To address image degradation caused by rain during image acquisition, this paper proposes a dual-branch joint network based on a sparse Transformer (DBSTNet) for single-image rain removal. The developed model comprises a rain removal subnet and a background recovery subnet: the former extracts rain trace information using a rain removal strategy, while the latter employs this information to restore background details. Furthermore, a U-shaped encoder-decoder branch (UEDB) focuses on local features to mitigate the impact of rain on background detail textures. UEDB incorporates a feature refinement unit to maximize the contribution of the channel attention mechanism in recovering local detail features. Additionally, since tokens with low relevance in the Transformer may hinder image recovery, this study introduces a residual sparse Transformer branch (RSTB) to overcome the limitations of the Convolutional Neural Network's (CNN's) receptive field. RSTB preserves only the most valuable self-attention values for feature aggregation, facilitating high-quality image reconstruction from a global perspective. Finally, the parallel dual-branch joint module, composed of the RSTB and UEDB branches, effectively captures local context and global structure, culminating in a clear background image. Experimental validation on synthetic and real datasets demonstrates that the derained images exhibit richer detail, significantly improving the overall visual effect.
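
A common way to realize a sparse Transformer that discards low-relevance tokens is to keep only the top-k attention scores per query and mask the rest to negative infinity before the softmax. The sketch below shows that generic mechanism; RSTB's actual sparsification rule may differ, and k is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=8):
    """Self-attention that keeps only the top_k scores per query row."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B, N, N)
    kth = scores.topk(top_k, dim=-1).values[..., -1:]       # k-th largest per row
    scores = scores.masked_fill(scores < kth, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 16, 32)
print(topk_sparse_attention(q, k, v).shape)                 # torch.Size([1, 16, 32])
```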

Citations: 0