Time series forecasting is widely applied in fields such as energy and network security. Various prediction models based on Transformer and MLP architectures have been proposed, but their performance may decline to varying degrees when applied to real-world sequences with significant non-stationarity. Traditional approaches generally adopt either stabilization alone or a combination of stabilization and non-stationarity compensation for prediction. However, non-stationarity is a crucial attribute of time series: the former tends to eliminate useful non-stationary patterns, while the latter may capture non-stationary information only partially. We therefore propose DiffMixer, which analyzes and forecasts the different frequency components of non-stationary time series. We use Variational Mode Decomposition (VMD) to obtain multiple frequency components of the sequence, Multi-scale Decomposition (MsD) to refine the decomposition of downsampled sequences, and an Improved Star Aggregate-Redistribute (iSTAR) module to capture the interdependencies between frequency components. In addition, a Frequency-domain Processing Block (FPB) captures the global features of each frequency component in the frequency domain, and Dual Dimension Fusion (DuDF) fuses the components along two dimensions, improving the predictive fit at each frequency. Compared with previous state-of-the-art methods, DiffMixer reduces the Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Symmetric Mean Absolute Percentage Error (SMAPE) by 24.5%, 12.3%, 13.5%, and 6.1%, respectively.
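
To make the decompose-then-forecast idea concrete, here is a minimal NumPy sketch. It is not the paper's implementation: true VMD is replaced by a crude FFT band split, each frequency component is forecast with an ordinary least-squares linear map, and all names and sizes (band_split, fit_linear_forecaster, the lookback/horizon lengths) are illustrative assumptions.

```python
# Minimal sketch: decompose a non-stationary series into frequency components,
# model each component separately, and fuse the per-component predictions.
import numpy as np

def band_split(x, n_bands=3):
    """Split a 1-D series into n_bands frequency components via FFT masking
    (a crude stand-in for Variational Mode Decomposition)."""
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.zeros_like(spec)
        masked[lo:hi] = spec[lo:hi]
        comps.append(np.fft.irfft(masked, n=len(x)))
    return comps  # components sum back (approximately) to the input

def fit_linear_forecaster(comp, lookback=48, horizon=12):
    """Fit Y = X @ W on sliding windows of one frequency component."""
    X, Y = [], []
    for t in range(len(comp) - lookback - horizon + 1):
        X.append(comp[t:t + lookback])
        Y.append(comp[t + lookback:t + lookback + horizon])
    X, Y = np.asarray(X), np.asarray(Y)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# Toy non-stationary series: trend + two oscillations + noise.
rng = np.random.default_rng(0)
t = np.arange(960)
series = (0.01 * t + np.sin(2 * np.pi * t / 24)
          + 0.3 * np.sin(2 * np.pi * t / 96)
          + 0.1 * rng.standard_normal(len(t)))

components = band_split(series, n_bands=3)
models = [fit_linear_forecaster(c) for c in components]
# Final forecast = sum of per-component forecasts on the last observed window.
forecast = sum(c[-48:] @ W for c, W in zip(components, models))
print(forecast.shape)  # (12,) -> 12-step-ahead prediction
```

The point of the sketch is only the overall flow the abstract describes; the paper's MsD, iSTAR, FPB, and DuDF modules are not reproduced here.
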
Incomplete multi-view clustering (IMVC) has attracted increasing attention because missing views occur frequently in real-world multi-view datasets. Traditional methods often address this by attempting to recover the missing views before clustering. However, these methods face two main limitations: (1) inadequate modeling of cross-view consistency, which weakens the relationships between views, especially at high missing rates, and (2) limited capacity to generate realistic and diverse missing views, leading to suboptimal clustering results. To tackle these issues, we propose a novel framework, Joint Generative Adversarial Network and Alignment Adversarial (JGA-IMVC). Our framework leverages adversarial learning to simultaneously generate missing views and enforce consistency alignment across views, ensuring effective reconstruction of incomplete data while preserving the underlying structural relationships. Extensive experiments on benchmark datasets with varying missing rates demonstrate that JGA-IMVC consistently outperforms current state-of-the-art methods, improving key clustering metrics such as Accuracy, Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI) by 3% to 5%. JGA-IMVC is particularly strong under high missing-rate conditions, confirming its robustness and generalization capability and offering a practical solution for incomplete multi-view clustering scenarios.
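
As an illustration of the two ingredients the abstract names, adversarial generation of a missing view and cross-view consistency alignment, the following PyTorch skeleton shows one way they can be combined. It is a hedged sketch under assumed shapes and names (view_dim, latent_dim, the encoder/generator/discriminator architectures, and unit loss weights), not the JGA-IMVC code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

view_dim, latent_dim = 64, 32

# Two per-view encoders, a generator that imputes view 2 from view 1's latent
# code, and a discriminator that judges real vs. generated view-2 samples.
encoder = nn.ModuleList([nn.Linear(view_dim, latent_dim) for _ in range(2)])
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, view_dim))
discriminator = nn.Sequential(nn.Linear(view_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(list(encoder.parameters()) + list(generator.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(x_v1, x_v2, mask_v2):
    """x_v1, x_v2: (B, view_dim); mask_v2: (B,) = 1 where view 2 is observed."""
    z1 = encoder[0](x_v1)
    fake_v2 = generator(z1)                          # impute view 2 from view 1

    # --- discriminator: observed real views vs. generated ones ---
    real = x_v2[mask_v2.bool()]
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(real),
                                                 torch.ones(len(real), 1))
              + F.binary_cross_entropy_with_logits(discriminator(fake_v2.detach()),
                                                   torch.zeros(len(fake_v2), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator: fool the discriminator + cross-view alignment ---
    adv = F.binary_cross_entropy_with_logits(discriminator(fake_v2),
                                             torch.ones(len(fake_v2), 1))
    z2 = encoder[1](x_v2)
    # alignment: latent codes of the two views should agree on observed samples
    align = F.mse_loss(z1[mask_v2.bool()], z2[mask_v2.bool()])
    # reconstruction where view 2 is actually observed
    rec = F.mse_loss(fake_v2[mask_v2.bool()], x_v2[mask_v2.bool()])
    g_loss = adv + align + rec
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage: 16 samples, the last 4 missing view 2.
x1, x2 = torch.randn(16, view_dim), torch.randn(16, view_dim)
mask = torch.cat([torch.ones(12), torch.zeros(4)])
print(train_step(x1, x2, mask))
```

In a full pipeline the aligned latent codes (and imputed views) would feed a downstream clustering step, which is omitted here.
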
Brain-inspired neural networks, drawing insights from biological neural systems, have emerged as a promising paradigm for temporal information processing due to their inherent neural dynamics. Among existing brain-inspired models, Spiking Neural Networks (SNNs) have received extensive attention. However, they often struggle to capture multi-timescale temporal features because their parameters are static across time steps and their spike activities are low-precision. To this end, we propose a dynamic SNN with enhanced dendritic heterogeneity to improve multi-timescale feature extraction. We design a Leaky Integrate Modulation neuron model with Dendritic Heterogeneity (DH-LIM) that replaces traditional spike activities with a continuous modulation mechanism, preserving nonlinear neuronal behavior while enhancing feature expressiveness. We also introduce an Adaptive Dendritic Plasticity (ADP) mechanism that dynamically adjusts dendritic timing factors based on the frequency-domain information of input signals, enabling the model to capture both rapidly and slowly changing temporal patterns. Extensive experiments on multiple datasets with rich temporal features demonstrate that the proposed method achieves excellent performance in processing complex temporal signals. These designs offer a new route to strengthening the multi-timescale feature extraction capability of SNNs and showcase the approach's broad application potential.
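
The following PyTorch sketch illustrates, in a deliberately simplified form, the two ideas the abstract describes: per-channel ("dendritic") heterogeneous decay factors instead of one shared time constant, and a smooth modulation output in place of a hard 0/1 spike. The class name (DHNeuron), the initialization range, and the exact update rule are our assumptions, not the paper's DH-LIM equations, and the ADP frequency-based adjustment of the time constants is omitted for brevity.

```python
import torch
import torch.nn as nn

class DHNeuron(nn.Module):
    def __init__(self, channels, tau_min=2.0, tau_max=20.0):
        super().__init__()
        # Heterogeneous, learnable time constants: one per channel/dendrite.
        init = torch.linspace(tau_min, tau_max, channels)
        self.log_tau = nn.Parameter(init.log())
        self.threshold = 1.0

    def forward(self, x):
        """x: (T, B, C) input current over T time steps."""
        T, B, C = x.shape
        decay = torch.exp(-1.0 / self.log_tau.exp())        # (C,), one decay per dendrite
        v = x.new_zeros(B, C)                                # membrane potential
        outputs = []
        for t in range(T):
            v = decay * v + x[t]                             # leaky integration
            out = torch.sigmoid(4.0 * (v - self.threshold))  # continuous modulation, not a spike
            v = v - out * self.threshold                     # soft reset proportional to output
            outputs.append(out)
        return torch.stack(outputs)                          # (T, B, C)

# Toy usage: sinusoidal drives with different periods hit dendrites with different decays.
T, B, C = 100, 4, 8
t = torch.arange(T, dtype=torch.float32)
drive = torch.stack([torch.sin(2 * torch.pi * t / p) for p in torch.linspace(5, 40, C)], dim=-1)
x = drive.unsqueeze(1).expand(T, B, C)
print(DHNeuron(C)(x).shape)  # torch.Size([100, 4, 8])
```

Because both the decay factors and the sigmoid modulation are differentiable, such a layer can be trained end-to-end with standard backpropagation, which is the property the continuous-modulation design exploits.
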
Recent advancements in visual speech recognition (VSR) have promoted progress in lip-to-speech synthesis, where pre-trained VSR models enhance the intelligibility of synthesized speech by providing valuable semantic information. The success of cascade frameworks, which combine pseudo-VSR with pseudo-text-to-speech (TTS) or implicitly use the transcribed text, highlights the benefit of leveraging VSR models. However, these methods typically rely on mel-spectrograms as an intermediate representation, which may introduce a key bottleneck: the domain gap between synthetic mel-spectrograms, generated from inherently error-prone lip-to-speech mappings, and the real mel-spectrograms used to train vocoders. This mismatch inevitably degrades synthesis quality. To bridge this gap, we propose Natural Lip-to-Speech (NaturalL2S), an end-to-end framework that trains the vocoder jointly with acoustic inductive priors. Specifically, our architecture introduces a fundamental frequency (F0) predictor to explicitly model prosodic variation; the predicted F0 contour drives a differentiable digital signal processing (DDSP) synthesizer that provides acoustic priors for subsequent refinement. Notably, the proposed system achieves satisfactory speaker similarity without requiring explicit speaker embeddings. Both objective metrics and subjective listening tests demonstrate that NaturalL2S significantly improves synthesized speech quality compared with existing state-of-the-art methods. Audio samples are available on our demonstration page: https://yifan-liang.github.io/NaturalL2S/.
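
To clarify what a DDSP-style synthesizer driven by an F0 contour does, here is a minimal, self-contained NumPy sketch of a harmonic oscillator bank: frame-rate F0 and harmonic amplitudes are upsampled to the audio rate, phase is accumulated from F0, and the harmonics are summed. The frame rate, number of harmonics, and amplitude handling are our assumptions; the paper's F0 predictor and refinement network are not shown.

```python
import numpy as np

def harmonic_synth(f0_frames, amp_frames, sr=16000, hop=200, n_harmonics=8):
    """f0_frames: (F,) Hz per frame; amp_frames: (F, n_harmonics) linear amplitudes."""
    n_samples = len(f0_frames) * hop
    # Upsample frame-rate controls to sample rate by linear interpolation.
    frame_t = np.arange(len(f0_frames)) * hop
    sample_t = np.arange(n_samples)
    f0 = np.interp(sample_t, frame_t, f0_frames)
    phase = 2 * np.pi * np.cumsum(f0) / sr              # instantaneous phase of the fundamental
    audio = np.zeros(n_samples)
    for k in range(1, n_harmonics + 1):
        amp_k = np.interp(sample_t, frame_t, amp_frames[:, k - 1])
        amp_k = np.where(k * f0 < sr / 2, amp_k, 0.0)    # silence harmonics above Nyquist
        audio += amp_k * np.sin(k * phase)
    return audio

# Toy controls: a 1-second F0 glide from 120 Hz to 180 Hz with 1/k harmonic decay.
frames = 80
f0_frames = np.linspace(120.0, 180.0, frames)
amp_frames = np.stack([0.5 / k * np.ones(frames) for k in range(1, 9)], axis=1)
audio = harmonic_synth(f0_frames, amp_frames)
print(audio.shape)  # (16000,) -> one second at 16 kHz
```

In an end-to-end system of the kind the abstract describes, this oscillator output would serve only as the acoustic prior; a learned network would then refine it into the final waveform.
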

