Demand forecasting is crucial for the robust development of industrial chains, given the direct impact of consumer market volatility on production planning. However, in the intricate industrial chain environment, the limited data accessible from independent production entities makes it difficult to achieve high performance and precise predictions of future demand. Centralized machine learning on data pooled from multiple production entities is a potential solution, yet concerns over consumer privacy, industry competition, and data security hinder its practical implementation. This research introduces a distributed learning approach that uses privacy-preserving federated learning to enhance time-series demand forecasting for multiple entities within industrial chains. Our approach involves several key steps, including federated learning among entities in the industrial chain on a blockchain platform, which ensures the trustworthiness of the computation process and its results. Leveraging Pre-trained Models (PTMs) facilitates federated fine-tuning among production entities, addressing model heterogeneity and minimizing the risk of privacy breaches. A comprehensive comparison of federated learning demand forecasting models on data from two real-world industry chains demonstrates the superior performance and enhanced security of our approach.
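The core loop the abstract describes — each entity fine-tunes locally on private demand data, and only model parameters (never raw data) are aggregated — can be sketched as plain federated averaging over a toy linear forecaster. All names and the one-step local update here are illustrative, not the authors' method; the paper additionally runs aggregation on a blockchain platform and fine-tunes PTMs.

```python
# FedAvg-style sketch: entities keep their demand series private and
# share only locally updated parameters, which are averaged each round.

def local_finetune(global_params, private_series, lr=0.1):
    """One gradient step of a toy linear forecaster y_hat = w * x."""
    w = global_params["w"]
    grad = 0.0
    for x, y in private_series:
        grad += 2 * (w * x - y) * x  # d/dw of squared error
    grad /= len(private_series)
    return {"w": w - lr * grad}

def federated_round(global_params, entities):
    """Average the locally fine-tuned parameters (FedAvg)."""
    local_models = [local_finetune(global_params, data) for data in entities]
    avg_w = sum(p["w"] for p in local_models) / len(local_models)
    return {"w": avg_w}

# Two entities with private histories (x = last demand, y = next demand).
entities = [
    [(1.0, 2.0), (2.0, 4.0)],   # entity A: roughly y = 2x
    [(1.0, 2.2), (3.0, 6.2)],   # entity B: noisy y ~ 2x
]
params = {"w": 0.0}
for _ in range(50):
    params = federated_round(params, entities)
```

The averaged model approaches the shared trend (w near 2) even though no entity ever exposes its raw series to the others.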
{"title":"Trustworthy Federated Fine-Tuning for Industrial Chains Demand Forecasting","authors":"Guoquan Huang;Guanyu Lin;Li Ning;Yicheng Xu;Chee Peng Lim;Yong Zhang","doi":"10.1109/TETCI.2025.3537941","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3537941","url":null,"abstract":"Demand forecasting is crucial for the robust development of industrial chains, given the direct impact of consumer market volatility on production planning. However, in the intricate industrial chain environment, limited accessible data from independent production entities poses challenges in achieving high performances and precise predictions for future demand. Centralized training using machine learning modeling on data from multiple production entities is a potential solution, yet issues like consumer privacy, industry competition, and data security hinder practical machine learning implementation. This research introduces an innovative distributed learning approach, utilizing privacy-preserving federated learning techniques to enhance time-series demand forecasting for multiple entities pertaining to industrial chains. Our approach involves several key steps, including federated learning among entities in the industrial chain on a blockchain platform, ensuring the trustworthiness of the computation process and results. Leveraging Pre-training Models (PTMs) facilitates federated fine-tuning among production entities, addressing model heterogeneity and minimizing privacy breach risks. 
A comprehensive comparison study on various federated learning demand forecasting models on data from two real-world industry chains demonstrates the superior performance and enhanced security of our developed approach.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1441-1453"},"PeriodicalIF":5.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-13. DOI: 10.1109/TETCI.2025.3537936
Ge Li;Hanqing Sun;Aiping Yang;Jiale Cao;Yanwei Pang
Motion expressions guided video segmentation aims to segment objects in videos according to given language descriptions of object motion. To accurately segment moving objects across frames, it is important to capture motion information of objects within the entire video. However, existing methods fail to encode object motion information accurately. In this paper, we propose an effective motion information mining framework, named EMIM, to improve motion expressions guided video segmentation. It consists of two novel modules: a hierarchical motion aggregation module and a box-level positional encoding module. Specifically, the hierarchical motion aggregation module captures the local and global temporal information of objects within a video. To achieve this, we introduce local-window self-attention and selective state space models for short-term and long-term feature aggregation, respectively. Inspired by the observation that the spatial changes of objects effectively reflect object motion across frames, the box-level positional encoding module integrates object spatial information into object embeddings. With the two proposed modules, our method can capture object spatial changes along with their temporal evolution. We conduct extensive experiments on the motion expressions guided video segmentation dataset MeViS to demonstrate the advantages of EMIM. Our EMIM achieves a $\mathcal{J}\&\mathcal{F}$ score of 42.2%, outperforming the prior approach, LMPM, by 5.0%.
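The hierarchical aggregation idea — short-term context from a local temporal window, long-term context from the whole sequence, then fusion — can be illustrated with scalar per-frame features. Plain averaging stands in for the paper's local-window self-attention and selective state space models; everything here is illustrative only.

```python
# Combine short-term (local window) and long-term (whole sequence)
# temporal aggregation of per-frame features, then fuse the two views.

def local_window_avg(frames, radius=1):
    """Short-term view: average each frame with its temporal neighbors."""
    out = []
    n = len(frames)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = frames[lo:hi]
        out.append(sum(window) / len(window))
    return out

def global_avg(frames):
    """Long-term view: broadcast the whole-sequence average."""
    g = sum(frames) / len(frames)
    return [g] * len(frames)

def aggregate(frames, alpha=0.5):
    """Fuse the short-term and long-term views per frame."""
    short = local_window_avg(frames)
    long_ = global_avg(frames)
    return [alpha * s + (1 - alpha) * g for s, g in zip(short, long_)]

feats = [0.0, 1.0, 0.0, 1.0]   # toy per-frame features
fused = aggregate(feats)
```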
{"title":"Motion Expressions Guided Video Segmentation via Effective Motion Information Mining","authors":"Ge Li;Hanqing Sun;Aiping Yang;Jiale Cao;Yanwei Pang","doi":"10.1109/TETCI.2025.3537936","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3537936","url":null,"abstract":"Motion expressions guided video segmentation is aimed to segment objects in videos according to the given language descriptions about object motion. To accurately segment moving objects across frames, it is important to capture motion information of objects within the entire video. However, the existing method fails to encode object motion information accurately. In this paper, we propose an effective motion information mining framework to improve motion expressions guided video segmentation, named EMIM. It consists of two novel modules, including a hierarchical motion aggregation module and a box-level positional encoding module. Specifically, the hierarchical motion aggregation module is aimed to capture local and global temporal information of objects within a video. To achieve this goal, we introduce local-window self-attention and selective state space models for short-term and long-term feature aggregation. Inspired by that the spatial changes of objects can effectively reflect the object motion across frames, the box-level positional encoding module integrates object spatial information into object embeddings. With two proposed modules, our proposed method can capture object spatial changes with temporal evolution. We conduct the extensive experiments on motion expressions guided video segmentation dataset MeViS to reveal the advantages of our EMIM. 
Our proposed EMIM achieves a <inline-formula><tex-math>$ mathcal {J & F}$</tex-math></inline-formula> score of 42.2%, outperforming the prior approach, LMPM, by 5.0%.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 5","pages":"3712-3718"},"PeriodicalIF":5.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145141740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The escalating threat of adversarial attacks on deep learning models, particularly in security-critical fields, has highlighted the need for robust deep learning systems. Conventional methods for evaluating their robustness rely on adversarial accuracy, which measures model performance under a specific perturbation intensity. However, this singular metric does not fully encapsulate the overall resilience of a model against varying degrees of perturbation. To address this issue, we propose a new metric, termed the adversarial hypervolume, for comprehensively assessing the robustness of deep learning models over a range of perturbation intensities from a multi-objective optimization standpoint. This metric allows for an in-depth comparison of defense mechanisms and recognizes even trivial improvements in robustness brought by less potent defensive strategies. We adopt a novel training algorithm to enhance adversarial robustness uniformly across various perturbation intensities, instead of optimizing adversarial accuracy alone. Our experiments validate the effectiveness of the adversarial hypervolume metric in robustness evaluation, demonstrating its ability to reveal subtle differences in robustness that adversarial accuracy overlooks.
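The metric's intuition — score a model by its entire accuracy-vs-perturbation curve rather than a single point — can be illustrated by approximating the area under that curve. The paper's exact hypervolume formulation may differ, and the accuracy values below are made up for illustration.

```python
# Aggregate robustness over a range of perturbation intensities instead
# of reporting accuracy at one epsilon: here, the area under the
# accuracy-vs-epsilon curve via the trapezoidal rule.

def area_under_robustness_curve(epsilons, accuracies):
    area = 0.0
    for (e0, a0), (e1, a1) in zip(zip(epsilons, accuracies),
                                  zip(epsilons[1:], accuracies[1:])):
        area += (e1 - e0) * (a0 + a1) / 2.0
    return area

eps = [0.0, 0.1, 0.2, 0.3]
model_a = [0.95, 0.80, 0.50, 0.20]   # higher clean accuracy, degrades fast
model_b = [0.90, 0.85, 0.70, 0.55]   # degrades gracefully

score_a = area_under_robustness_curve(eps, model_a)
score_b = area_under_robustness_curve(eps, model_b)
```

At epsilon = 0 (plain adversarial accuracy's favorite point) model A looks better, yet model B dominates once the whole intensity range is integrated — exactly the kind of difference a single-intensity metric overlooks.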
{"title":"Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume","authors":"Ping Guo;Cheng Gong;Xi Lin;Zhiyuan Yang;Qingfu Zhang","doi":"10.1109/TETCI.2025.3535656","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3535656","url":null,"abstract":"The escalating threat of adversarial attacks on deep learning models, particularly in security-critical fields, has highlighted the need for robust deep learning systems. Conventional evaluation methods of their robustness rely on adversarial accuracy, which measures the model performance under a specific perturbation intensity. However, this singular metric does not fully encapsulate the overall resilience of a model against varying degrees of perturbation. To address this issue, we propose a new metric termed as the adversarial hypervolume for assessing the robustness of deep learning models comprehensively over a range of perturbation intensities from a multi-objective optimization standpoint. This metric allows for an in-depth comparison of defense mechanisms and recognizes the trivial improvements in robustness brought by less potent defensive strategies. We adopt a novel training algorithm to enhance adversarial robustness uniformly across various perturbation intensities, instead of only optimizing adversarial accuracy. 
Our experiments validate the effectiveness of the adversarial hypervolume metric in robustness evaluation, demonstrating its ability to reveal subtle differences in robustness that adversarial accuracy overlooks.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1367-1378"},"PeriodicalIF":5.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge distillation aims to transfer knowledge from a large teacher model to a lightweight student model, enabling the student to achieve performance comparable to the teacher. Existing methods explore various strategies for distillation, including soft logits, intermediate features, and even class-aware logits. Class-aware distillation, in particular, treats the columns of logit matrices as class representations, capturing potential relationships among instances within a batch. However, we argue that representing class embeddings solely as column vectors may not fully capture their inherent properties. In this study, we revisit class-aware knowledge distillation and propose that effective transfer of class-level knowledge requires two regularization strategies: separability and orthogonality. Additionally, we introduce an asymmetric architecture design to further enhance the transfer of class-level knowledge. Together, these components form a new methodology, Class Discriminative Knowledge Distillation (CD-KD). Empirical results demonstrate that CD-KD significantly outperforms several state-of-the-art logit-based and feature-based methods across diverse visual classification tasks, highlighting its effectiveness and robustness.
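The orthogonality idea — treat the columns of a batch's logit matrix as class vectors and penalize their mutual alignment — can be sketched as a mean absolute cosine penalty over column pairs. This is an illustrative regularizer, not the authors' exact loss; the separability term is omitted.

```python
# Orthogonality penalty on class columns of a (batch x classes) logit
# matrix: 0 when class columns are mutually orthogonal, 1 when collinear.
import math

def column(m, j):
    return [row[j] for row in m]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def orthogonality_penalty(logits):
    """Mean |cos| over distinct class-column pairs."""
    k = len(logits[0])
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return sum(abs(cosine(column(logits, i), column(logits, j)))
               for i, j in pairs) / len(pairs)

orthogonal = [[1.0, 0.0], [0.0, 1.0]]   # class columns orthogonal
collinear  = [[1.0, 2.0], [2.0, 4.0]]   # class columns collinear
```

Minimizing such a penalty during distillation would push the student's class representations apart, which is the discriminative property the abstract argues for.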
{"title":"Class Discriminative Knowledge Distillation","authors":"Shuoxi Zhang;Hanpeng Liu;Yuyi Wang;Kun He;Jun Lin;Yang Zeng","doi":"10.1109/TETCI.2025.3529896","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529896","url":null,"abstract":"Knowledge distillation aims to transfer knowledge from a large teacher model to a lightweight student model, enabling the student to achieve performance comparable to the teacher. Existing methods explore various strategies for distillation, including soft logits, intermediate features, and even class-aware logits. Class-aware distillation, in particular, treats the columns of logit matrices as class representations, capturing potential relationships among instances within a batch. However, we argue that representing class embeddings solely as column vectors may not fully capture their inherent properties. In this study, we revisit class-aware knowledge distillation and propose that effective transfer of class-level knowledge requires two regularization strategies: <italic>separability</i> and <italic>orthogonality</i>. Additionally, we introduce an asymmetric architecture design to further enhance the transfer of class-level knowledge. Together, these components form a new methodology, Class Discriminative Knowledge Distillation (CD-KD). 
Empirical results demonstrate that CD-KD significantly outperforms several state-of-the-art logit-based and feature-based methods across diverse visual classification tasks, highlighting its effectiveness and robustness.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1340-1351"},"PeriodicalIF":5.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Change detection (CD) is a crucial task in various real-world applications, aiming to identify regions of change between two images captured at different times. However, existing approaches mainly focus on designing advanced network architectures that map feature differences to change maps, overlooking the impact of feature difference quality. In this paper, we approach CD from a different perspective by exploring how to optimize feature differences to effectively highlight changes and suppress background regions. To achieve this, we propose a novel module called the iterative difference-enhanced transformers (IDET). IDET consists of three transformers: two for extracting long-range information from the bi-temporal images, and one for enhancing the feature difference. Unlike previous transformers, the third transformer utilizes the outputs of the first two to guide iterative and dynamic enhancement of the feature difference. To further strengthen this refinement, we introduce a multi-scale IDET-based change detection approach, which utilizes multi-scale representations of the images to refine the feature difference at multiple scales, together with a coarse-to-fine fusion strategy to combine all refinements. Our final CD method surpasses nine state-of-the-art methods on six large-scale datasets across different application scenarios. This highlights the significance of feature difference enhancement and demonstrates the effectiveness of IDET. Furthermore, we demonstrate that IDET can be seamlessly integrated into other existing CD methods, resulting in a substantial improvement in detection accuracy.
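The iterative enhancement idea — repeatedly refining the feature difference using the two images' features so that changed positions are highlighted and background responses are suppressed — can be caricatured with scalar features. This is purely illustrative; IDET performs the refinement with transformers over real feature maps.

```python
# Iteratively pull a noisy raw difference toward a change signal derived
# from the two images' (here scalar, per-position) features.

def refine(diff, f1, f2, steps=5, rate=0.5):
    """Each step moves the difference a fraction toward |f1 - f2|."""
    target = [abs(a - b) for a, b in zip(f1, f2)]  # idealized change signal
    for _ in range(steps):
        diff = [d + rate * (t - d) for d, t in zip(diff, target)]
    return diff

f_before = [0.2, 0.9, 0.4]
f_after  = [0.2, 0.1, 0.4]        # only position 1 actually changed
noisy_diff = [0.3, 0.2, 0.25]     # raw difference barely separates it
refined = refine(noisy_diff, f_before, f_after)
```

After refinement, the changed position stands out sharply while the unchanged (background) positions are driven toward zero — the behavior the paper asks of a good feature difference.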
{"title":"IDET: Iterative Difference-Enhanced Transformers for High-Quality Change Detection","authors":"Qing Guo;Ruofei Wang;Rui Huang;Renjie Wan;Shuifa Sun;Yuxiang Zhang","doi":"10.1109/TETCI.2025.3529893","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529893","url":null,"abstract":"Change detection (CD) is a crucial task in various real-world applications, aiming to identify regions of change between two images captured at different times. However, existing approaches mainly focus on designing advanced network architectures that map feature differences to change maps, overlooking the impact of feature difference quality. In this paper, we approach CD from a different perspective by exploring <italic>how to optimize feature differences to effectively highlight changes and suppress background regions</i>. To achieve this, we propose a novel module called the iterative difference-enhanced transformers (IDET). IDET consists of three transformers: two for extracting long-range information from the bi-temporal images, and one for enhancing the feature difference. Unlike previous transformers, the third transformer utilizes the outputs of the first two transformers to guide iterative and dynamic enhancement of the feature difference. To further enhance refinement, we introduce the multi-scale IDET-based change detection approach, which utilizes multi-scale representations of the images to refine the feature difference at multiple scales. Additionally, we propose a coarse-to-fine fusion strategy to combine all refinements. Our final CD method surpasses nine state-of-the-art methods on six large-scale datasets across different application scenarios. This highlights the significance of feature difference enhancement and demonstrates the effectiveness of IDET. 
Furthermore, we demonstrate that our IDET can be seamlessly integrated into other existing CD methods, resulting in a substantial improvement in detection accuracy.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1093-1106"},"PeriodicalIF":5.3,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-28. DOI: 10.1109/TETCI.2025.3529902
Xinglin Zhou;Yifu Yuan;Shaofu Yang;Jianye Hao
Hierarchical reinforcement learning (HRL) provides a promising solution for agents facing complex tasks with sparse rewards, using a hierarchical framework that divides tasks into subgoals and completes them sequentially. However, current methods struggle to find subgoals suitable for ensuring a stable learning process. To address this issue, we propose a general hierarchical reinforcement learning framework incorporating human feedback and dynamic distance constraints, termed MENTOR, which acts as a “mentor”. Specifically, human feedback is incorporated into high-level policy learning to find better subgoals. Furthermore, we propose the Dynamic Distance Constraint (DDC) mechanism, which dynamically adjusts the space of optional subgoals so that MENTOR can generate subgoals matching the low-level policy's learning progress, from easy to hard; as a result, learning efficiency is improved. For the low-level policy, a dual-policy design decouples exploration from exploitation to stabilize the training process. Extensive experiments demonstrate that MENTOR uses a small amount of human feedback to achieve significant improvements in complex tasks with sparse rewards.
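The DDC mechanism's easy-to-hard behavior can be sketched in one dimension: the high level may only propose subgoals within a distance budget of the current state, and the budget grows as the low-level success rate improves. The names, the threshold, and the budget schedule below are illustrative, not the paper's.

```python
# Dynamic Distance Constraint sketch: restrict the subgoal space by a
# distance budget, and expand the budget as the low level gets better.

def admissible_subgoals(state, candidates, budget):
    """Subgoals the high level is currently allowed to propose."""
    return [g for g in candidates if abs(g - state) <= budget]

def update_budget(budget, success_rate, grow=1.5, shrink=0.8):
    """Expand the subgoal space when the low level succeeds often."""
    return budget * (grow if success_rate > 0.7 else shrink)

candidates = [1.0, 2.0, 2.5, 9.0]   # subgoal positions on a line
budget = 2.0
early = admissible_subgoals(0.0, candidates, budget)   # easy subgoals only
budget = update_budget(budget, success_rate=0.9)       # low level improved
late = admissible_subgoals(0.0, candidates, budget)    # harder ones unlock
```

The curriculum effect is visible directly: early on only nearby (easy) subgoals are admissible, and success unlocks more distant ones while the farthest remain out of reach.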
{"title":"MENTOR: Guiding Hierarchical Reinforcement Learning With Human Feedback and Dynamic Distance Constraint","authors":"Xinglin Zhou;Yifu Yuan;Shaofu Yang;Jianye Hao","doi":"10.1109/TETCI.2025.3529902","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529902","url":null,"abstract":"Hierarchical reinforcement learning (HRL) provides a promising solution for complex tasks with sparse rewards of agents, which uses a hierarchical framework that divides tasks into subgoals and completes them sequentially. However, current methods struggle to find suitable subgoals for ensuring a stable learning process. To address the issue, we propose a general hierarchical reinforcement learning framework incorporating human feedback and dynamic distance constraints, termed <bold>MENTOR</b>, which acts as a “<italic>mentor</i>”. Specifically, human feedback is incorporated into high-level policy learning to find better subgoals. Furthermore, we propose the Dynamic Distance Constraint (DDC) mechanism dynamically adjusting the space of optional subgoals, such that MENTOR can generate subgoals matching the low-level policy learning process from easy to hard. As a result, the learning efficiency can be improved. As for low-level policy, a dual policy is designed for exploration-exploitation decoupling to stabilize the training process. 
Extensive experiments demonstrate that MENTOR uses a small amount of human feedback to achieve significant improvement in complex tasks with sparse rewards.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1292-1306"},"PeriodicalIF":5.3,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-27. DOI: 10.1109/TETCI.2025.3529840
Emrah Hancer;Bing Xue;Mengjie Zhang
Multi-label classification (MLC) is crucial as it allows for a more nuanced and realistic representation of complex real-world scenarios, where instances may belong to multiple categories simultaneously, providing a comprehensive understanding of the data. Effective feature selection in MLC is paramount, as it can not only enhance model efficiency and interpretability but also mitigate the curse of dimensionality, ensuring more accurate and streamlined predictions for complex, multi-label data. Despite the proven efficacy of evolutionary computation (EC) techniques in enhancing feature selection for multi-label datasets, research on feature selection in MLC remains sparse in the domain of multi- and many-objective optimization. This paper proposes a many-objective differential evolution algorithm called MODivDE for feature selection in high-dimensional MLC tasks. The MODivDE algorithm introduces multiple improvements and innovations in quality indicator-based selection, logic-based search strategy, and diversity-based archive update. The results demonstrate the exceptional performance of the MODivDE algorithm across a diverse range of high-dimensional datasets, surpassing recently introduced many-objective and conventional multi-label feature selection algorithms. The advancements in MODivDE collectively contribute to significantly improved accuracy, efficiency, and interpretability compared to state-of-the-art methods in multi-label feature selection.
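The underlying differential evolution machinery — mutate with a scaled difference of population vectors, cross over with the target, and threshold-decode a feature subset — can be sketched as follows. The many-objective, diversity-guided selection that actually distinguishes MODivDE is not shown; all names here are illustrative.

```python
# DE/rand/1 trial-vector construction for feature selection: continuous
# vectors encode feature weights; a 0.5 threshold decodes the subset.
import random

def de_trial(a, b, c, target, rng, F=0.5, CR=0.9):
    """Mutation v = a + F*(b - c), then binomial crossover with target."""
    mutant = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    jrand = rng.randrange(len(target))  # guarantee one gene from mutant
    return [m if (rng.random() < CR or j == jrand) else t
            for j, (m, t) in enumerate(zip(mutant, target))]

def decode(vec, threshold=0.5):
    """Indices of features whose weight exceeds the threshold."""
    return [j for j, v in enumerate(vec) if v > threshold]

pop = [[0.9, 0.1, 0.6], [0.2, 0.8, 0.4], [0.7, 0.3, 0.9], [0.1, 0.6, 0.2]]
rng = random.Random(0)  # fixed seed for reproducibility
trial = de_trial(pop[1], pop[2], pop[3], pop[0], rng)
subset = decode(trial)
```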
{"title":"A Many-Objective Diversity-Guided Differential Evolution Algorithm for Multi-Label Feature Selection in High-Dimensional Datasets","authors":"Emrah Hancer;Bing Xue;Mengjie Zhang","doi":"10.1109/TETCI.2025.3529840","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529840","url":null,"abstract":"Multi-label classification (MLC) is crucial as it allows for a more nuanced and realistic representation of complex real-world scenarios, where instances may belong to multiple categories simultaneously, providing a comprehensive understanding of the data. Effective feature selection in MLC is paramount as it cannot only enhance model efficiency and interpretability but also mitigate the curse of dimensionality, ensuring more accurate and streamlined predictions for complex, multi-label data. Despite the proven efficacy of evolutionary computation (EC) techniques in enhancing feature selection for multi-label datasets, research on feature selection in MLC remains sparse in the domain of multi- and many-objective optimization. This paper proposes a many-objective differential evolution algorithm called MODivDE for feature selection in high-dimensional MLC tasks. The MODivDE algorithm involves multiple improvements and innovations in quality indicator-based selection, logic-based search strategy, and diversity-based archive update. The results demonstrate the exceptional performance of the MODivDE algorithm across a diverse range of high-dimensional datasets, surpassing recently introduced many-objective and conventional multi-label feature selection algorithms. 
The advancements in MODivDE collectively contribute to significantly improved accuracy, efficiency, and interpretability compared to state-of-the-art methods in the realm of multi-label feature selection.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1226-1237"},"PeriodicalIF":5.3,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-27. DOI: 10.1109/TETCI.2025.3526285
Pengfei Yu;Jingjing Gu;Dechang Pi;Qiang Zhou;Qiuhong Wang
This paper explores an implicit Aspect Category Sentiment Analysis task, which aims to determine the sentiment polarities of given aspect categories in social reviews. Currently, most researchers focus on explicit aspects and rarely address implicit ones. Meanwhile, due to the semantic complexity of natural language, it is difficult for existing methods to retrieve such implicit semantics from sentences. To this end, we propose a novel framework, the Aspect-aware Graph Interaction Attention Network (AGIAN), which concentrates on aspect-related information expressed implicitly in sentences and identifies the corresponding sentiment polarity. Specifically, we first introduce an aspect-aware graph to represent potential associations between the implicit aspect category and the sentence. Then, we utilize two types of graph neural networks to extract rich relational semantics. Finally, we design a graph interaction mechanism to integrate sentiment features specific to the aspect category for sentiment classification. We evaluate the performance of the proposed framework on six publicly available benchmark datasets. Extensive experiments demonstrate that, compared to competitive baseline methods, AGIAN effectively improves accuracy and achieves state-of-the-art performance on the F1-score.
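A minimal stand-in for aspect-conditioned pooling — score each word embedding against an aspect embedding and softmax-weight the sentence representation toward aspect-relevant words — looks like this. It is a generic attention sketch, not AGIAN's graph interaction mechanism; all vectors are made up.

```python
# Aspect-aware attention pooling: dot-product scores against the aspect
# embedding, softmax to weights, then weight-averaged sentence vector.
import math

def attention_pool(words, aspect):
    scores = [sum(w * a for w, a in zip(vec, aspect)) for vec in words]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(aspect)
    pooled = [sum(weights[i] * words[i][d] for i in range(len(words)))
              for d in range(dim)]
    return weights, pooled

words = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # toy word embeddings
aspect = [1.0, 0.0]                             # toy aspect embedding
weights, pooled = attention_pool(words, aspect)
```

Words aligned with the aspect receive the largest weights, so the pooled sentence vector leans toward aspect-relevant content — the effect AGIAN achieves with graph interaction instead of plain attention.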
{"title":"Aspect-Aware Graph Interaction Attention Network for Aspect Category Sentiment Analysis","authors":"Pengfei Yu;Jingjing Gu;Dechang Pi;Qiang Zhou;Qiuhong Wang","doi":"10.1109/TETCI.2025.3526285","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3526285","url":null,"abstract":"This paper explores an implicit Aspect Category Sentiment Analysis task, which aims to determine the sentiment polarities of given aspect categories in social reviews. Currently, most researchers focus more on explicit aspect and rarely work on implicit aspect. Meanwhile, due to the semantic complexity of natural language, it is difficult for existing methods to retrieve such implicit semantics in sentences. To this end, we propose a novel framework, the Aspect-aware Graph Interaction Attention Network (AGIAN), which concentrates on aspect-related information implicitly in sentences and identifies its corresponding sentiment polarity. Specifically, first, we introduce an aspect-aware graph to represent potential associations between the implicit aspect category and the sentence. Then, we utilize two types of graph neural networks to extract rich relational semantics. Finally, we design a graph interaction mechanism to integrate sentiment features specific to the aspect category for sentiment classification. We evaluate the performance of the proposed framework on six publicly available benchmark datasets. 
Extensive experiments demonstrate that, compared to some competitive baseline methods, AGIAN can effectively improve accuracy and achieve state-of-the-art performance on the F1-score.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 4","pages":"3122-3135"},"PeriodicalIF":5.3,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-23. DOI: 10.1109/TETCI.2025.3529608
{"title":"IEEE Computational Intelligence Society Information","authors":"","doi":"10.1109/TETCI.2025.3529608","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529608","url":null,"abstract":"","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"C3-C3"},"PeriodicalIF":5.3,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10850899","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-23. DOI: 10.1109/TETCI.2025.3529610
{"title":"IEEE Transactions on Emerging Topics in Computational Intelligence Information for Authors","authors":"","doi":"10.1109/TETCI.2025.3529610","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529610","url":null,"abstract":"","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"C4-C4"},"PeriodicalIF":5.3,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10850888","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}