Time series prediction is one of the most important applications of Fuzzy Cognitive Maps (FCMs). In general, the state of an FCM in forecasting depends only on the state at the previous moment, but in practice it is also affected by earlier states. Hence, Higher-Order Fuzzy Cognitive Maps (HFCMs), which extend FCMs by incorporating historical information, have been proposed and widely used for time series forecasting. However, using HFCMs to handle sparse, large-scale multivariate time series is still a challenge: large-scale data makes it difficult to determine the causal relationships between nodes because of the increased number of nodes, so it is necessary to explore the relationships between nodes to guide large-scale HFCM learning. Therefore, a sparse large-scale HFCM learning algorithm guided by the Spearman correlation coefficient, called SG-HFCM, is proposed in this paper. The SG-HFCM model is specified as follows. First, the learning of the HFCM model is transformed into a regression model, and an adaptive loss function is utilized to enhance the robustness of the model. Second, a sparsity-inducing norm penalty is used to improve the sparsity of the weight matrix. Third, in order to characterize the correlations between variables more accurately, the Spearman correlation coefficient is added as a regularization term to guide the learning of the weight matrix. When calculating the Spearman correlation coefficient, a domain-interval-splitting method allows us to better understand the characteristics of the data, obtain better correlations within each sub-interval, and characterize the relationships between variables more accurately in order to guide the weight matrix. In addition, the Alternating Direction Method of Multipliers (ADMM) and quadratic programming are used to solve the resulting optimization problem, where quadratic programming ensures that the weights remain within the required range and that the optimal solution is obtained.
Finally, in comparisons with five baseline algorithms, the SG-HFCM model achieved an average improvement of 11.93% in prediction accuracy on GRNs, indicating that the proposed model has good predictive performance.
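To make the abstract's pipeline concrete, the sketch below (hypothetical function names; all hyperparameters are assumptions) shows how a Spearman-derived penalty can guide sparse HFCM weight learning. The actual SG-HFCM uses an adaptive loss, interval-split Spearman coefficients, and ADMM with quadratic programming; this minimal sketch substitutes a plain squared loss, global Spearman coefficients, and a proximal-gradient loop purely for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hfcm_design_matrix(X, order):
    """Stack the previous `order` states as regressors (higher-order FCM)."""
    T, n = X.shape
    rows = [np.hstack([X[t - k] for k in range(1, order + 1)])
            for t in range(order, T)]
    return np.asarray(rows), X[order:]

def learn_sg_hfcm(X, order=2, lam=0.1, gamma=0.5, n_iter=500, lr=0.01):
    """Hypothetical sketch: HFCM learning as regression with an L1-style
    penalty scaled by (1 - |Spearman rho|), so weakly rank-correlated node
    pairs are shrunk toward zero more aggressively."""
    A, Y = hfcm_design_matrix(X, order)
    n = X.shape[1]
    rho = np.abs(spearmanr(X).correlation)          # n x n rank correlations
    penalty = np.tile(1.0 - rho, (order, 1))        # (order*n) x n penalty weights
    W = np.zeros((order * n, n))
    for _ in range(n_iter):
        s = sigmoid(A @ W)                          # FCM sigmoid activation
        grad = A.T @ ((s - Y) * s * (1.0 - s)) / len(A)  # squared-loss gradient
        W -= lr * grad
        # proximal (soft-threshold) step for the correlation-weighted L1 term
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam * (gamma + penalty), 0.0)
        W = np.clip(W, -1.0, 1.0)                   # FCM weights lie in [-1, 1]
    return W
```

The clip step stands in for the role quadratic programming plays in the paper, i.e. keeping the learned weights inside the admissible FCM range.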
Aspect-based sentiment analysis aims to analyze and understand people’s opinions from different aspects. Some comments do not contain explicit opinion words but still convey a clear human-perceived emotional orientation, which is known as implicit sentiment. Most previous research relies on contextual information from a text for implicit aspect-based sentiment analysis; however, little work has integrated external knowledge with contextual information. This paper proposes an implicit aspect-based sentiment analysis model combining supervised contrastive learning with knowledge-enhanced fine-tuning on BERT (BERT-SCL+KEFT). In the pre-training phase, the model utilizes supervised contrastive learning (SCL) on large-scale sentiment-annotated corpora to acquire sentiment knowledge. In the fine-tuning phase, the model uses a knowledge-enhanced fine-tuning (KEFT) method to capture explicit and implicit aspect-based sentiments. Specifically, the model utilizes knowledge embedding to inject external general knowledge into textual entities by using knowledge graphs, enriching the textual information. Finally, the model combines external knowledge and contextual features to predict the implicit sentiment in a text. The experimental results demonstrate that the proposed BERT-SCL+KEFT model outperforms other baselines on the general implicit sentiment analysis and implicit aspect-based sentiment analysis tasks. In addition, ablation results show that removing the knowledge embedding module or the supervised contrastive learning module significantly decreases performance, indicating the importance of these modules. All experiments validate that the proposed BERT-SCL+KEFT model effectively achieves implicit aspect-based sentiment classification.
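The supervised contrastive pre-training phase mentioned above follows the general SupCon idea: embeddings of same-label samples are pulled together and different-label samples pushed apart. A minimal NumPy sketch of that objective (not the paper's exact implementation, which operates on BERT representations; temperature and shapes here are assumptions):

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Sketch of a supervised contrastive (SupCon-style) loss over a batch of
    embeddings: for each anchor, maximize the softmax probability of its
    same-label positives relative to all other samples."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature                     # cosine similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    pos_count = pos.sum(axis=1)
    # mean log-probability over each anchor's positives, negated
    pos_log = np.where(pos, log_prob, 0.0).sum(axis=1)
    loss = -(pos_log / np.maximum(pos_count, 1))
    return loss[pos_count > 0].mean()
```

In the pre-training phase described in the abstract, a loss of this form would be minimized over large sentiment-annotated corpora so that the encoder's sentence embeddings cluster by sentiment polarity before fine-tuning.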
Large-scale multiple criteria group decision-making (MCGDM) is prevalent in diverse decision-making scenarios, involving numerous decision makers (DMs), large sets of alternatives and criteria, and continuous temporal cycles. Opinions from DMs evolve through iterative interaction, leading to dynamic opinion evolution. However, traditional MCGDM methodologies usually establish opinion formation at a static time point during information aggregation, which can lead to information distortion. This study develops a novel large-scale MCGDM method with information emendation based on an unsupervised opinion dynamics (UOD) model, combined with the intuitionistic fuzzy set (IFS) and the technique for order preference by similarity to an ideal solution (TOPSIS). The IFS is utilized to quantify opinions since it can effectively achieve a tradeoff between information retention and convenience of evaluation. Simultaneously, in the proposed UOD model, a weight updating mechanism is further considered to improve interaction adequacy, and the unsupervised mechanism for the interaction threshold helps to decrease the influence of DMs’ subjectivity. Moreover, numerical simulations validate the UOD model’s feasibility. Finally, a school site selection problem is used to demonstrate the effectiveness of the proposed method. This study provides a methodological reference for solving large-scale MCGDM problems, facilitates rapid convergence of opinions within large-scale groups, and enriches research on opinion dynamics in the field of decision-making.
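The IFS-plus-TOPSIS ranking step described above can be sketched as follows. This is a generic intuitionistic-fuzzy TOPSIS (membership mu, non-membership nu, hesitancy pi = 1 - mu - nu, Euclidean IFS distance, benefit criteria assumed), not the paper's full UOD pipeline; function and variable names are illustrative:

```python
import numpy as np

def ifs_topsis(mu, nu, weights):
    """Rank alternatives from intuitionistic fuzzy evaluations.
    mu, nu: (alternatives x criteria) membership / non-membership degrees.
    Returns (ranking indices best-first, closeness coefficients)."""
    pi = 1.0 - mu - nu                               # hesitancy degree
    # positive and negative ideal solutions (benefit criteria assumed)
    mu_p, nu_p = mu.max(axis=0), nu.min(axis=0)
    mu_n, nu_n = mu.min(axis=0), nu.max(axis=0)
    pi_p, pi_n = 1.0 - mu_p - nu_p, 1.0 - mu_n - nu_n

    def dist(m0, n0, p0):
        # weighted Euclidean distance between IFS values
        return np.sqrt(0.5 * (weights * ((mu - m0) ** 2 +
                                         (nu - n0) ** 2 +
                                         (pi - p0) ** 2)).sum(axis=1))

    d_pos = dist(mu_p, nu_p, pi_p)                   # distance to ideal
    d_neg = dist(mu_n, nu_n, pi_n)                   # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg + 1e-12)      # relative closeness
    return np.argsort(-closeness), closeness
```

In the proposed method, such a ranking would be applied after the UOD model has driven the DMs' IFS-quantified opinions to convergence.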
Time series forecasting is closely tied to production and daily life and has long attracted widespread attention. Improving the performance of long-term multivariate time series forecasting (MTSF) is a highly challenging task, as it requires mining complicated and obscure temporal patterns along many dimensions. For this reason, this paper proposes a long-term forecasting model based on multi-domain fusion (VTNet) to adaptively capture and refine multi-scale intra- and inter-variate dependencies. In contrast to previous techniques, we devise a dual-stream learning architecture. Firstly, the fast Fourier transform (FFT) is adopted to extract frequency-domain information; the original sequences are then transformed into 2D visual features in the temporal-frequency domain, and a 2D-TBlock is designed for multi-scale dynamic learning. Secondly, a combination of convolutional and recurrent networks further explores local temporal features while preserving the global trend. Finally, multi-modal circulant fusion is applied to obtain a more comprehensive and enriched fused feature representation, further improving overall performance. Extensive experiments on 9 public benchmark datasets and a real-world irrigation water level dataset showcase VTNet’s improved performance and generalization. Moreover, VTNet yields 46.93% and 25.36% relative improvements for water level forecasting, revealing its potential application value in water-saving planning and early warning of extreme events.
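The FFT-based step in the first stream — picking dominant frequencies and folding the 1D series into a 2D temporal-frequency feature map — can be sketched as below. This is a generic illustration of the reshape-by-dominant-period idea, not VTNet's exact 2D-TBlock input construction; function names and the choice of `k` are assumptions:

```python
import numpy as np

def dominant_periods(x, k=2):
    """Pick the k strongest non-DC frequencies of a 1D series via the real
    FFT and derive the corresponding periods, which set the 2D reshape size."""
    amp = np.abs(np.fft.rfft(x))
    amp[0] = 0.0                                    # drop the DC component
    top = np.argsort(-amp)[:k]                      # strongest frequency bins
    return [len(x) // f for f in top if f > 0]

def to_2d(x, period):
    """Fold a 1D series into a (cycles x period) 2D 'visual' feature map,
    truncating the tail that does not fill a whole cycle."""
    cycles = len(x) // period
    return x[:cycles * period].reshape(cycles, period)
```

A 2D map built this way aligns points one period apart along one axis and points within a period along the other, so standard 2D (vision-style) blocks can learn both intra- and inter-period patterns.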