This paper considers a modified Leslie–Gower prey–predator reaction–diffusion model that introduces harvesting of both species. Both the temporal and the spatiotemporal dynamics of the model are examined. We determine the stability regions and draw bifurcation diagrams to assess the effect of harvesting on the model, revealing that harvesting has a stabilizing effect. Local bifurcations, such as transcritical and Hopf bifurcations, appear in the temporal system. For the spatiotemporal model, the conditions for Turing instability are determined. The amplitude equations for the critical modes are derived using multiple-time-scale analysis, taking the harvesting effort as the bifurcation parameter. We also verify the theoretical results by plotting several kinds of stationary patterns, including stripes, spots, and a mixture of stripes and spots. A key observation of this study is that as the harvesting effort rises, the patterns steadily turn into spots; that is, harvesting strongly influences pattern formation. This fosters a dynamic equilibrium that allows competitors to maintain distance, optimize resource use, and survive.
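To make the temporal part concrete, below is a minimal sketch of a modified Leslie–Gower system with constant-effort harvesting of both species, integrated by explicit Euler. The functional forms and the parameter values (`a`, `d`, `s`, `k`, `E1`, `E2`) are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical modified Leslie-Gower system with harvesting (illustrative
# forms; the paper's exact functional responses may differ):
#   du/dt = u(1 - u) - a*u*v/(u + d) - E1*u   (prey with Holling-II predation)
#   dv/dt = s*v*(1 - v/(u + k)) - E2*v        (Leslie-Gower predator growth)
def rhs(u, v, a=0.5, d=0.2, s=0.5, k=0.3, E1=0.1, E2=0.1):
    du = u * (1.0 - u) - a * u * v / (u + d) - E1 * u
    dv = s * v * (1.0 - v / (u + k)) - E2 * v
    return du, dv

def simulate(u0=0.5, v0=0.5, dt=0.01, steps=20000, **params):
    """Explicit Euler integration of the temporal (non-spatial) system."""
    u, v = u0, v0
    for _ in range(steps):
        du, dv = rhs(u, v, **params)
        # Clamp at zero: densities cannot become negative.
        u, v = max(u + dt * du, 0.0), max(v + dt * dv, 0.0)
    return u, v

u_star, v_star = simulate()
print(u_star, v_star)  # long-run state under harvesting
```

Sweeping `E1` or `E2` and recording the long-run state reproduces, in spirit, the bifurcation diagrams used to study the stabilizing effect of harvesting.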
The Beverton–Holt Ricker competition model is a planar difference system that describes both intraspecific and interspecific competition among individuals. Previous works investigated the stability of equilibria in some cases, showed the existence of stable 2-periodic points when there are no interior equilibria, and numerically found an attractor with a riddled basin of attraction for suitable parameters. In this paper, we prove the existence of the global attractor and give a complete description of the qualitative properties and bifurcations of all equilibria, except in some cases of high degeneracy. Moreover, we obtain various one-dimensional and two-dimensional structures of the global attractor that were not considered in previous works.
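For intuition, the following sketch iterates a planar map of Beverton–Holt Ricker type: Beverton–Holt self-limitation `x/(1+x)` combined with Ricker-style exponential interspecific suppression. The exact parameterization is an assumption for illustration and may differ from the model studied in the paper.

```python
import math

def step(x, y, a=3.0, b=3.0, c=0.5, d=0.5):
    # Beverton-Holt intraspecific term a*x/(1+x) damped by a Ricker-type
    # interspecific factor exp(-c*y); illustrative parameterization.
    x_next = a * x / (1.0 + x) * math.exp(-c * y)
    y_next = b * y / (1.0 + y) * math.exp(-d * x)
    return x_next, y_next

x, y = 0.5, 0.4
for _ in range(2000):
    x, y = step(x, y)
print(x, y)  # long-run state on (or near) the attractor
```

Iterating from many initial conditions and recording the limit sets is the numerical counterpart of the global-attractor analysis described above.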
This paper investigates multistability in state-dependent switched neural networks (SSNNs) with Mexican-hat-type activation functions (AFs), establishing the coexistence and stability of multiple equilibrium points (EPs). First, the state space is partitioned based on the geometric characteristics of the Mexican-hat-type AF, which makes it possible to locate the EPs. Second, the coexistence of EPs for -neuron SSNNs under specific sufficient conditions is proved via Brouwer's fixed-point theorem. Next, using diagonally dominant matrix theory and the Gershgorin circle theorem, it is proven that there are asymptotically stable EPs under some conditions, where and are nonnegative integers satisfying . Therefore, SSNNs can achieve a larger storage capacity by selecting appropriate parameters. Finally, the correctness of the results is verified through two numerical examples.
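A representative piecewise-linear Mexican-hat-type AF (one common choice in the multistability literature; the paper's breakpoints may differ) saturates low, rises through a linear middle segment, peaks, and falls back to saturation. This is precisely the geometry used to partition the state space:

```python
def mexican_hat(x):
    # Piecewise-linear Mexican-hat-type activation (representative choice):
    # saturated at -1, linear increase on [-1, 1], linear decrease on [1, 3],
    # saturated at -1 again beyond 3.
    if x <= -1.0:
        return -1.0
    elif x <= 1.0:
        return x
    elif x <= 3.0:
        return 2.0 - x
    else:
        return -1.0

# The four segments give each neuron several candidate regions, which is why
# the number of coexisting equilibria grows geometrically with the neuron count.
print([mexican_hat(v) for v in (-2.0, 0.0, 1.0, 2.0, 4.0)])  # [-1.0, 0.0, 1.0, 0.0, -1.0]
```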
Taking the nonlinear Schrödinger equation (NLSE) as an example, we provide, from a mathematical viewpoint, rigorous evidence that the numerical noise of a chaotic system, acting as a tiny artificial stochastic disturbance, can grow exponentially to a macroscopic level. As a result, numerical simulations produced by traditional algorithms in double precision may rapidly become badly polluted, leading to huge deviations from the 'true' solution, not only in trajectory but sometimes also in statistics and/or qualitative properties. Small physical disturbances in time and space are unavoidable in practice and are often much larger than artificial numerical noise. Hence, from a physical viewpoint, it is wrong to neglect small spatio-temporal disturbances of a chaotic system: chaos should not be described by deterministic equations.
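The exponential amplification of round-off-level noise is easy to demonstrate on any chaotic map. Here, as a simple stand-in for the NLSE's chaotic dynamics, a perturbation of 1e-15 in the logistic map reaches order one within a few dozen iterations in double precision:

```python
# Two logistic-map trajectories differing initially by 1e-15, i.e. at the
# level of double-precision round-off noise.
x, y = 0.3, 0.3 + 1e-15
max_sep = 0.0
for n in range(60):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    max_sep = max(max_sep, abs(x - y))
print(max_sep)  # order-one separation: the tiny noise has reached macro-level
```

Since the Lyapunov exponent of the logistic map at r = 4 is ln 2, a 1e-15 perturbation needs only about 50 iterations (2^50 ≈ 1e15) to become macroscopic, which is the same mechanism by which numerical noise pollutes chaotic simulations.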
Predictions of complex systems ranging from natural language processing to weather forecasting have benefited from advances in Recurrent Neural Networks (RNNs). RNNs are typically trained using techniques like Backpropagation Through Time (BPTT) to minimize one-step-ahead prediction loss. During testing, RNNs often operate in an auto-regressive mode, with the output of the network fed back into its input. However, this process can eventually result in exposure bias, since the network has been trained to process "ground-truth" data rather than its own predictions. This inconsistency causes errors that compound over time: the data distribution used to evaluate the loss during training differs from the operating conditions the model actually encounters at test time. Inspired by the solution to this challenge in language-processing networks, we propose the Scheduled Autoregressive Truncated Backpropagation Through Time (BPTT-SA) algorithm for predicting complex dynamical systems with RNNs. We find that BPTT-SA effectively reduces iterative error propagation in convolutional and convolutional-autoencoder RNNs and demonstrate its capabilities in the long-term prediction of high-dimensional fluid flows.
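The core of scheduled sampling, the mechanism BPTT-SA borrows from language modeling, can be sketched on a toy one-parameter "model" (a hypothetical stand-in for an RNN cell): at each unrolled step the next input is the ground truth with probability `p_truth`, otherwise the model's own prediction, and `p_truth` is annealed from 1 (pure teacher forcing) toward 0 (fully auto-regressive).

```python
import random

random.seed(0)

# Toy target dynamics x_{t+1} = 0.9 * x_t; the "model" is one coefficient w.
seq = [1.0]
for _ in range(20):
    seq.append(0.9 * seq[-1])

w, lr = 0.5, 0.2
for epoch in range(300):
    # Linear annealing of the teacher-forcing probability.
    p_truth = max(0.0, 1.0 - epoch / 100.0)
    inp, grad = seq[0], 0.0
    for t in range(len(seq) - 1):
        pred = w * inp
        grad += (pred - seq[t + 1]) * inp  # squared-error gradient w.r.t. w
        # Scheduled sampling: feed ground truth or the model's own prediction.
        inp = seq[t + 1] if random.random() < p_truth else pred
    w -= lr * grad / (len(seq) - 1)
print(w)  # converges near the true coefficient 0.9
```

Because the later training epochs are fully auto-regressive, the model is optimized under the same feedback loop it faces at test time, which is how the compounding-error mismatch is reduced.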