
IEEE Transactions on Emerging Topics in Computational Intelligence: Latest Publications

Co-Occurrence Relationship Driven Hierarchical Attention Network for Brain CT Report Generation
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-18 | DOI: 10.1109/TETCI.2024.3413002
Xiaodan Zhang;Shixin Dou;Junzhong Ji;Ying Liu;Zheng Wang
Automatic generation of medical reports for Brain Computed Tomography (CT) imaging is crucial for helping radiologists make more accurate clinical diagnoses efficiently. Brain CT imaging typically contains rich pathological information, including common pathologies that often co-occur in one report and rare pathologies that appear in medical reports with lower frequency. However, current research ignores the potential co-occurrence between common pathologies and pays insufficient attention to rare pathologies, severely restricting the accuracy and diversity of the generated medical reports. In this paper, we propose a Co-occurrence Relationship Driven Hierarchical Attention Network (CRHAN) to improve Brain CT report generation by mining common and rare pathologies in Brain CT imaging. Specifically, the proposed CRHAN follows a general encoder-decoder framework with two novel attention modules. In the encoder, a co-occurrence relationship guided semantic attention (CRSA) module is proposed to extract the critical semantic features by embedding the co-occurrence relationship of common pathologies into semantic attention. In the decoder, a common-rare topic driven visual attention (CRVA) module is proposed to fuse the common and rare semantic features as sentence topic vectors, and then guide the visual attention to capture important lesion features for medical report generation. Experiments on the Brain CT dataset demonstrate the effectiveness of the proposed method.
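The implementation details are not given in this listing, but the CRSA idea of biasing semantic attention with a pathology co-occurrence prior can be illustrated with a small sketch. The module below is a hypothetical PyTorch toy, not the authors' code: the class name, tensor shapes, and the way a row-normalised co-occurrence matrix re-weights the topic attention are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoOccurrenceSemanticAttention(nn.Module):
    """Toy semantic attention biased by a pathology co-occurrence prior.

    Hypothetical sketch: `cooc` is a (K, K) matrix whose entry (i, j) reflects
    how often pathology topics i and j appear in the same report."""
    def __init__(self, feat_dim, topic_dim, num_topics):
        super().__init__()
        self.query = nn.Linear(feat_dim, topic_dim)
        self.topic_emb = nn.Parameter(torch.randn(num_topics, topic_dim))

    def forward(self, visual_feat, cooc):
        # visual_feat: (B, feat_dim); cooc: (K, K), row-normalised
        q = self.query(visual_feat)                    # (B, topic_dim)
        scores = q @ self.topic_emb.t()                # (B, K) raw topic scores
        attn = F.softmax(scores, dim=-1)               # initial semantic attention
        attn = attn @ cooc                             # spread weight to co-occurring topics
        attn = attn / attn.sum(dim=-1, keepdim=True)   # renormalise
        semantic = attn @ self.topic_emb               # (B, topic_dim) fused topic vector
        return semantic, attn

# Usage with random data
mod = CoOccurrenceSemanticAttention(feat_dim=512, topic_dim=256, num_topics=8)
cooc = torch.rand(8, 8)
cooc = cooc / cooc.sum(dim=1, keepdim=True)
sem, attn = mod(torch.randn(2, 512), cooc)
print(sem.shape, attn.shape)  # torch.Size([2, 256]) torch.Size([2, 8])
```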
Citations: 0
Sparse Graph Tensor Learning for Multi-View Spectral Clustering
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-12 | DOI: 10.1109/TETCI.2024.3409724
Man-Sheng Chen;Zhi-Yuan Li;Jia-Qi Lin;Chang-Dong Wang;Dong Huang
Multi-view spectral clustering has achieved impressive performance by learning multiple robust and meaningful similarity graphs for clustering. Generally, the existing literature constructs multiple similarity graphs with a predefined similarity measure (e.g., the Euclidean distance), which lacks the ability to learn the sparse and reliable connections that carry critical information in graph learning while preserving the low-rank structure. To address these challenges, a novel Sparse Graph Tensor Learning for Multi-view Spectral Clustering (SGTL) method is designed in this paper, where multiple similarity graphs are seamlessly coupled with the cluster indicators and constrained by a low-rank graph tensor. Specifically, a novel graph learning paradigm is designed by establishing an explicit theoretical connection between the similarity matrices and the cluster indicator matrices, so that the constructed similarity graphs enjoy the desired block-diagonal and sparse properties and learn only a small portion of reliable links. Then, we stack multiple similarity matrices into a low-rank graph tensor to better preserve the low-rank structure of the reliable links in graph learning, where the key knowledge conveyed by the singular values from different views is explicitly considered. Extensive experiments on several benchmark datasets demonstrate the superiority of SGTL.
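As a rough illustration of the data flow described above (per-view similarity graphs stacked into a third-order graph tensor), the sketch below builds a sparse k-NN graph for each view, stacks the graphs, and spectrally clusters a naive average. It is only a baseline stand-in under stated assumptions: SGTL's coupling with cluster indicators and its low-rank tensor constraint are not implemented, and the function name and parameters are made up for illustration.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

def sparse_graph_tensor_baseline(views, n_clusters, k=10):
    """Crude stand-in for SGTL: build a sparse k-NN graph per view, stack the
    graphs into a third-order tensor, and spectrally cluster the averaged graph.
    The real method couples the graphs with cluster indicators and a low-rank
    tensor constraint, which this sketch does not implement."""
    graphs = []
    for X in views:                                   # each X: (n_samples, d_v)
        W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
        W = 0.5 * (W + W.T)                           # symmetrise
        graphs.append(W)
    tensor = np.stack(graphs, axis=2)                 # (n, n, n_views) graph tensor
    fused = tensor.mean(axis=2)                       # naive fusion across views
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(fused)
    return labels

# Two synthetic views of the same 60 samples
rng = np.random.default_rng(0)
views = [rng.normal(size=(60, 5)), rng.normal(size=(60, 8))]
print(sparse_graph_tensor_baseline(views, n_clusters=3)[:10])
```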
Citations: 0
A Bi-Search Evolutionary Algorithm for High-Dimensional Bi-Objective Feature Selection
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-30 | DOI: 10.1109/TETCI.2024.3393388
Hang Xu;Bing Xue;Mengjie Zhang
High dimensionality often challenges the efficiency and accuracy of a classifier, while evolutionary feature selection is an effective method for data preprocessing and dimensionality reduction. However, with the exponential expansion of the search space as the number of features grows, traditional evolutionary feature selection methods can still find it difficult to reach optimal or near-optimal solutions in the large-scale search space. To overcome this issue, in this paper we propose a bi-search evolutionary algorithm (termed BSEA) for tackling high-dimensional feature selection in classification, with two conflicting optimization objectives (i.e., minimizing both the number of selected features and the classification error). In BSEA, a bi-search evolutionary mode combining forward and backward searching tasks is adopted to enhance the search ability in the large-scale search space; in addition, an adaptive feature analysis mechanism is designed to explore promising features for efficiently reproducing more diverse offspring. In the experiments, BSEA is comprehensively compared with 9 recent or classic state-of-the-art MOEAs on a series of 11 high-dimensional datasets with no fewer than 2000 features. The empirical results suggest that BSEA generally performs best on most of the datasets across all performance metrics while remaining computationally efficient, and that each of its essential components contributes positively to the search ability, with the best results obtained when they work together.
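The two competing objectives can be made concrete with a tiny Pareto-archive loop over feature masks. This is not BSEA (no bi-search mode, no adaptive feature analysis); it is a minimal sketch whose helper names and settings are assumptions, showing only how "fraction of selected features" and "cross-validated error" trade off.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def objectives(mask, X, y):
    """Two objectives to minimise: fraction of selected features, CV error."""
    if mask.sum() == 0:
        return (1.0, 1.0)
    err = 1.0 - cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
    return (mask.mean(), err)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def toy_bi_objective_fs(X, y, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    archive = []                                  # non-dominated (mask, objectives) pairs
    mask = rng.integers(0, 2, X.shape[1])
    for _ in range(iters):
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= 1       # flip one feature in/out
        f = objectives(cand, X, y)
        if not any(dominates(g, f) for _, g in archive):
            archive = [(m, g) for m, g in archive if not dominates(f, g)]
            archive.append((cand, f))
            mask = cand                           # hill-climb along non-dominated moves
    return archive

X, y = make_classification(n_samples=150, n_features=40, n_informative=8, random_state=0)
front = toy_bi_objective_fs(X, y)
print([(int(m.sum()), round(e, 3)) for m, (r, e) in front])
```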
Citations: 0
Multi-modal Authentication Model for Occluded Faces in a Challenging Environment
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-30 | DOI: 10.1109/TETCI.2024.3390058
Dahye Jeong;Eunbeen Choi;Hyeongjin Ahn;Ester Martinez-Martin;Eunil Park;Angel P. del Pobil
Authentication systems are crucial in the digital era, providing reliable protection of personal information. Most authentication systems rely on a single modality, such as the face, fingerprints, or password sensors. An authentication system based on a single modality, however, degrades when the information of that modality is covered; in particular, face identification does not work well when a mask is worn, as in a COVID-19 situation. In this paper, we focus on a multi-modality approach to improve the performance of occluded face identification. Multi-modal authentication is crucial for building a robust system because the additional modalities can compensate for the one that is missing or occluded in a uni-modal system. In this light, we propose DemoID, a multi-modal authentication system based on face and voice for human identification in a challenging environment. Moreover, we build a demographic module to efficiently handle the demographic information of individual faces. The experimental results showed an accuracy of 99% when using all modalities and an overall improvement of 5.41%–10.77% relative to uni-modal face models. Furthermore, our model demonstrated the highest performance compared to existing multi-modal models and also showed promising results on the real-world dataset constructed for this study.
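A common way to realise such multi-modal identification is score-level fusion of per-modality embeddings; whether DemoID fuses at the score or feature level is not stated in this listing, so the snippet below is a hedged sketch with made-up embedding sizes and a hand-set fusion weight, illustrating how a masked face can be down-weighted in favour of the voice channel.

```python
import numpy as np

# Hypothetical pre-computed embeddings for enrolled users (e.g. from a face CNN
# and a speaker-verification model); sizes and the fusion rule are assumptions.
rng = np.random.default_rng(1)
n_users, d_face, d_voice = 20, 128, 64
face_emb = rng.normal(size=(n_users, d_face))
voice_emb = rng.normal(size=(n_users, d_voice))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(query_face, query_voice, user_id, w_face=0.5):
    """Score-level fusion: cosine similarity per modality, weighted sum.
    When one modality is occluded (e.g. a masked face) its weight can be lowered."""
    s_face = cosine(query_face, face_emb[user_id])
    s_voice = cosine(query_voice, voice_emb[user_id])
    return w_face * s_face + (1.0 - w_face) * s_voice

# A masked-face probe: the face embedding is heavily corrupted, so the face
# channel is down-weighted and the decision leans on the voice channel.
probe_face = face_emb[3] + rng.normal(scale=1.5, size=d_face)
probe_voice = voice_emb[3] + rng.normal(scale=0.1, size=d_voice)
print(fused_score(probe_face, probe_voice, user_id=3, w_face=0.2))
```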
Citations: 0
PV-SSD: A Multi-Modal Point Cloud 3D Object Detector Based on Projection Features and Voxel Features
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-29 | DOI: 10.1109/TETCI.2024.3389710
Yongxin Shao;Aihong Tan;Zhetao Sun;Enhui Zheng;Tianhong Yan;Peng Liao
3D object detection using LiDAR is critical for autonomous driving. However, the point cloud data in autonomous driving scenarios is sparse. Converting the sparse point cloud into regular data representations (voxels or projection) often leads to information loss due to downsampling or excessive compression of feature information. This kind of information loss will adversely affect detection accuracy, especially for objects with fewer reflective points like cyclists. This paper proposes a multi-modal point cloud 3D object detector based on projection features and voxel features, which consists of two branches. One, called the voxel branch, is used to extract fine-grained local features. Another, called the projection branch, is used to extract projection features from a bird's-eye view and focus on the correlation of local features in the voxel branch. By feeding voxel features into the projection branch, we can compensate for the information loss in the projection branch while focusing on the correlation between neighboring local features in the voxel features. To achieve comprehensive feature fusion of voxel features and projection features, we propose a multi-modal feature fusion module (MSSFA). To further mitigate the loss of crucial features caused by downsampling, we propose a voxel feature extraction method (VR-VFE), which samples feature points based on their importance for the detection task. To validate the effectiveness of our method, we tested it on the KITTI dataset and ONCE dataset. The experimental results show that our method has achieved significant improvement in the detection accuracy of objects with fewer reflection points like cyclists.
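To make the two representations concrete, the sketch below voxelises a toy point cloud and also projects it to a bird's-eye-view density map, i.e., the kind of raw inputs the voxel branch and projection branch would consume. The learned branch features, the MSSFA fusion module, and VR-VFE sampling are not reproduced; the grid ranges and resolutions here are assumptions.

```python
import numpy as np

def bev_and_voxel_features(points, x_range=(0, 40), y_range=(-20, 20),
                           z_range=(-2, 2), res=1.0, z_res=1.0):
    """Toy illustration of the two data representations PV-SSD fuses:
    a bird's-eye-view projection (point density per BEV cell) and a voxel
    occupancy grid. The paper's learned features and fusion are not shown."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    nz = int((z_range[1] - z_range[0]) / z_res)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int).clip(0, nx - 1)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int).clip(0, ny - 1)
    iz = ((points[:, 2] - z_range[0]) / z_res).astype(int).clip(0, nz - 1)
    bev = np.zeros((nx, ny))
    np.add.at(bev, (ix, iy), 1.0)                 # point density per BEV cell
    voxels = np.zeros((nx, ny, nz), dtype=bool)
    voxels[ix, iy, iz] = True                     # occupancy per voxel
    return bev, voxels

pts = np.random.default_rng(0).uniform([0, -20, -2], [40, 20, 2], size=(5000, 3))
bev, vox = bev_and_voxel_features(pts)
print(bev.shape, vox.shape, int(vox.sum()))       # (40, 40) (40, 40, 4) ...
```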
Citations: 0
Generalized Population-Based Training for Hyperparameter Optimization in Reinforcement Learning
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-26 | DOI: 10.1109/TETCI.2024.3389777
Hui Bai;Ran Cheng
Hyperparameter optimization plays a key role in the machine learning domain. Its significance is especially pronounced in reinforcement learning (RL), where agents continuously interact with and adapt to their environments, requiring dynamic adjustments in their learning trajectories. To cater to this dynamicity, the Population-Based Training (PBT) was introduced, leveraging the collective intelligence of a population of agents learning simultaneously. However, PBT tends to favor high-performing agents, potentially neglecting the explorative potential of agents on the brink of significant advancements. To mitigate the limitations of PBT, we present the Generalized Population-Based Training (GPBT), a refined framework designed for enhanced granularity and flexibility in hyperparameter adaptation. Complementing GPBT, we further introduce Pairwise Learning (PL). Instead of merely focusing on elite agents, PL employs a comprehensive pairwise strategy to identify performance differentials and provide holistic guidance to underperforming agents. By integrating the capabilities of GPBT and PL, our approach significantly improves upon traditional PBT in terms of adaptability and computational efficiency. Rigorous empirical evaluations across a range of RL benchmarks confirm that our approach consistently outperforms not only the conventional PBT but also its Bayesian-optimized variant.
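A stripped-down PBT-style loop makes the exploit/explore mechanics easier to see. The sketch below periodically pairs each poorly performing worker with a better one, copies its parameters, and perturbs its learning rate on a toy quadratic problem; GPBT's finer-grained adaptation and PL's pairwise guidance rules are richer than this, and every name and constant here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(theta, lr):
    """Stand-in for one RL/SGD update: a gradient step on a toy quadratic loss."""
    grad = 2.0 * theta
    return theta - lr * grad

def toy_pbt(pop_size=8, steps=30):
    """Minimal PBT-style loop with a pairwise exploit step: every few steps each
    weak worker is paired with a stronger one, inherits its weights and
    hyperparameters, then perturbs the learning rate."""
    thetas = rng.normal(size=pop_size) * 5.0
    lrs = 10.0 ** rng.uniform(-3, 0, size=pop_size)
    for t in range(steps):
        thetas = np.array([train_step(th, lr) for th, lr in zip(thetas, lrs)])
        if (t + 1) % 5 == 0:                       # periodic exploit/explore
            perf = -thetas ** 2                    # higher is better (negative loss)
            order = np.argsort(perf)               # worst ... best
            for bad, good in zip(order[: pop_size // 2], order[::-1][: pop_size // 2]):
                thetas[bad] = thetas[good]         # inherit weights from the paired worker
                lrs[bad] = lrs[good] * 10.0 ** rng.uniform(-0.3, 0.3)  # perturb lr
    best = int(np.argmin(thetas ** 2))
    return thetas[best], lrs[best]

print(toy_pbt())
```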
Citations: 0
A Novel Multi-Source Information Fusion Method Based on Dependency Interval
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-25 | DOI: 10.1109/TETCI.2024.3370032
Weihua Xu;Yufei Lin;Na Wang
With the rapid development of the Big Data era, it is necessary to extract the relevant information from a large amount of data. Single-source information systems are often affected by extreme values and outliers, so multi-source information systems are more common and their data more reliable; information fusion is a common way to handle such multi-source systems. Compared with single-valued data, interval-valued data can describe the uncertainty and random variation of data more effectively. This article proposes a novel interval-valued multi-source information fusion method based on dependency intervals. The method constructs a dependency function that takes into account both the interval length and the number of data points in the interval, so that the fused data become more concentrated and the influence of outliers and extreme values is eliminated. Because the boundary of the dependency interval is not fixed, a median point within the interval is selected as a bridge to simplify obtaining the dependency interval. Furthermore, a multi-source information system fusion algorithm based on dependency intervals is proposed, and experiments are conducted on 9 UCI datasets to compare the classification accuracy and quality of the proposed algorithm with traditional information fusion methods. The experimental results show that this method is more effective than the maximum interval, quartile interval, and mean interval methods, and its validity is confirmed through hypothesis testing.
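The paper's exact dependency function is not reproduced here, but its two ingredients (the interval length and the number of points the interval covers) and the median anchor can be illustrated with a small grid search. The scoring rule below (coverage minus a length penalty) is purely an assumption for illustration, not the authors' definition.

```python
import numpy as np

def dependency_interval(values, weight=0.5, n_grid=50):
    """Fuse one attribute observed by several sources into an interval.
    Hypothetical dependency function: reward the fraction of source values the
    interval covers, penalise its normalised length, and grow the interval
    symmetrically around the median point."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    span = values.max() - values.min()
    best = (med, med, -np.inf)
    for r in np.linspace(0.0, span, n_grid):
        lo, hi = med - r, med + r
        coverage = np.mean((values >= lo) & (values <= hi))
        length_penalty = (hi - lo) / span if span > 0 else 0.0
        score = coverage - weight * length_penalty
        if score > best[2]:
            best = (lo, hi, score)
    return best[:2]

# Five sources report the same attribute; one is an outlier and gets excluded.
print(dependency_interval([4.9, 5.1, 5.0, 5.2, 9.7]))
```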
Citations: 0
Low-Contrast Medical Image Segmentation via Transformer and Boundary Perception
IF 5.3 | Tier 3, Computer Science | Q1 Mathematics | Pub Date: 2024-04-19 | DOI: 10.1109/TETCI.2024.3353624
Yinglin Zhang;Ruiling Xi;Wei Wang;Heng Li;Lingxi Hu;Huiyan Lin;Dave Towey;Ruibin Bai;Huazhu Fu;Risa Higashita;Jiang Liu
Low-contrast medical image segmentation is a challenging task that requires full use of local details and global context. However, existing convolutional neural networks (CNNs) cannot fully exploit global information due to limited receptive fields and local weight sharing. On the other hand, the transformer effectively establishes long-range dependencies but lacks desirable properties for modeling local details. This paper proposes a Transformer-embedded Boundary perception Network (TBNet) that combines the advantages of transformer and convolution for low-contrast medical image segmentation. Firstly, the transformer-embedded module uses convolution at the low-level layer to model local details and uses the Enhanced TRansformer (ETR) to capture long-range dependencies at the high-level layer. This module can extract robust features with semantic contexts to infer the possible target location and basic structure in low-contrast conditions. Secondly, we utilize the decoupled body-edge branch to promote general feature learning and perceive precise boundary locations. The ETR establishes long-range dependencies across the whole feature map range and is enhanced by introducing local information. We implement it in a parallel mode, i.e., the multi-head self-attention group captures the global relationship, while the convolution group retains local details. We compare TBNet with other state-of-the-art (SOTA) methods on the cornea endothelial cell, ciliary body, and kidney segmentation tasks. The TBNet improves segmentation performance, proving its effectiveness and robustness.
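The overall layout described above (convolution for low-level detail, a transformer over the downsampled feature map for long-range context, and decoupled body/edge heads) can be sketched in a few lines of PyTorch. This is a structural toy only; the ETR block, the parallel conv-attention groups, and all layer sizes here are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ToyTBNet(nn.Module):
    """Rough structural sketch: convolution for local detail, a transformer
    encoder over the downsampled feature map for long-range context, and
    decoupled body / edge prediction heads."""
    def __init__(self, ch=32, n_classes=1):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.global_ctx = nn.TransformerEncoderLayer(d_model=ch, nhead=4,
                                                     batch_first=True)
        self.body_head = nn.Conv2d(ch, n_classes, 1)
        self.edge_head = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):
        f = self.local(x)                          # (B, ch, H/2, W/2) local details
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)      # (B, H*W/4, ch) token sequence
        tokens = self.global_ctx(tokens)           # long-range dependencies
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.body_head(f), self.edge_head(f)

net = ToyTBNet()
body, edge = net(torch.randn(1, 1, 64, 64))
print(body.shape, edge.shape)    # torch.Size([1, 1, 32, 32]) twice
```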
Citations: 0
Compensation Atmospheric Scattering Model and Two-Branch Network for Single Image Dehazing
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-18 | DOI: 10.1109/TETCI.2024.3386838
Xudong Wang;Xi'ai Chen;Weihong Ren;Zhi Han;Huijie Fan;Yandong Tang;Lianqing Liu
Most existing dehazing networks rely on synthetic hazy-clear image pairs for training, and thus fail to work well in real-world scenes. In this paper, we deduce a reformulated atmospheric scattering model for a hazy image and propose a novel lightweight two-branch dehazing network. In the model, we use a Transformation Map to represent the dehazing transformation and use a Compensation Map to represent variable illumination compensation. Based on this model, we design a Two-Branch Network (TBN) to jointly estimate the Transformation Map and Compensation Map. Our TBN is designed with a shared Feature Extraction Module and two Adaptive Weight Modules. The Feature Extraction Module is used to extract shared features from hazy images. The two Adaptive Weight Modules generate two groups of adaptive weighted features for the Transformation Map and Compensation Map, respectively. This design allows for a targeted conversion of features to the Transformation Map and Compensation Map. To further improve the dehazing performance in the real-world, we propose a semi-supervised learning strategy for TBN. Specifically, by performing supervised pre-training based on synthetic image pairs, we propose a Self-Enhancement method to generate pseudo-labels, and then further train our TBN with the pseudo-labels in a semi-supervised way. Extensive experiments demonstrate that the model-based TBN outperforms the state-of-the-art methods on various real-world datasets.
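The abstract does not spell out the reformulated model, but rearranging the classic atmospheric scattering model I = J*t + A*(1 - t) for the clear image J already yields the form J = T*I + C, with a per-pixel transformation map T = 1/t and a compensation map C = A*(1 - 1/t), which motivates estimating exactly two such maps. Whether this matches the paper's precise reformulation is an assumption; the snippet below only verifies the algebra on toy data.

```python
import numpy as np

def rearranged_scattering_model(I, t, A):
    """Classic atmospheric scattering model: I = J*t + A*(1 - t).
    Rearranging for the clear image gives J = T*I + C with T = 1/t and
    C = A*(1 - 1/t), i.e. a transformation map plus a compensation map."""
    T = 1.0 / np.clip(t, 1e-3, 1.0)              # transformation map
    C = A * (1.0 - T)                            # compensation map
    return np.clip(T[..., None] * I + C[..., None], 0.0, 1.0)

rng = np.random.default_rng(0)
J_true = rng.uniform(0.2, 0.9, size=(8, 8, 3))   # toy clear image
t = rng.uniform(0.4, 1.0, size=(8, 8))           # toy transmission
A = 0.95                                         # global atmospheric light
I_hazy = J_true * t[..., None] + A * (1 - t)[..., None]
J_rec = rearranged_scattering_model(I_hazy, t, A)
print(np.allclose(J_rec, J_true, atol=1e-6))     # True: exact inversion when t, A are known
```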
Citations: 0
Graph Contrastive Learning for Tracking Dynamic Communities in Temporal Networks
IF 5.3 | Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-17 | DOI: 10.1109/TETCI.2024.3386844
Yun Ai;Xianghua Xie;Xiaoke Ma
Temporal networks are ubiquitous because complex systems in nature and society keep evolving, and tracking dynamic communities is critical for revealing the mechanisms of such systems. However, current algorithms rely on the temporal smoothness framework to balance clustering accuracy at the current time against clustering drift at historical times, and they are criticized for failing to characterize the temporality of networks and to determine its importance. To overcome these problems, we propose a novel algorithm by joining Non-negative matrix factorization and Contrastive learning for Dynamic Community detection (jNCDC). Specifically, jNCDC learns the features of vertices by projecting successive snapshots into a shared subspace, obtaining low-dimensional vertex representations with matrix factorization. Subsequently, it constructs an evolution graph to explicitly measure the relations of vertices by representing vertices at the current time with features from historical times, paving the way to characterize the dynamics of networks at the vertex level. Finally, graph contrastive learning utilizes the roles of vertices to select positive and negative samples to further improve the quality of the features. These procedures are seamlessly integrated into an overall objective function, and optimization rules are deduced. To the best of our knowledge, jNCDC is the first graph contrastive learning approach for dynamic community detection, providing an alternative to the current temporal smoothness framework. Experimental results demonstrate that jNCDC is superior to state-of-the-art approaches in terms of accuracy.
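The first stage (projecting successive snapshots into a shared subspace with matrix factorization) can be approximated by a joint NMF that shares one basis across snapshots. The multiplicative updates below are standard NMF updates extended to several snapshots; the evolution graph and the contrastive sampling stage of jNCDC are omitted, and the function names and the toy graph are assumptions for illustration.

```python
import numpy as np

def joint_nmf_snapshots(adjs, k, iters=200, eps=1e-9, seed=0):
    """Shared-subspace NMF over temporal snapshots: factor every adjacency
    matrix A_t ~ B @ H_t with one common basis B, so vertex representations
    from successive snapshots live in the same space."""
    rng = np.random.default_rng(seed)
    n = adjs[0].shape[0]
    B = rng.random((n, k))
    Hs = [rng.random((k, n)) for _ in adjs]
    for _ in range(iters):
        num = sum(A @ H.T for A, H in zip(adjs, Hs))
        den = B @ sum(H @ H.T for H in Hs) + eps
        B *= num / den                                   # multiplicative update, shared basis
        Hs = [H * (B.T @ A) / (B.T @ B @ H + eps) for A, H in zip(adjs, Hs)]
    communities = np.argmax(B, axis=1)                   # hard assignment per vertex
    return B, Hs, communities

# Two snapshots of a 6-node graph whose two triangles start to connect
A1 = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0],
               [0, 0, 0, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]], float)
A2 = A1.copy()
A2[2, 3] = A2[3, 2] = 1.0
B, Hs, comm = joint_nmf_snapshots([A1, A2], k=2)
print(comm)   # e.g. [0 0 0 1 1 1]
```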
Citations: 0