This paper addresses the fixed-time optimal bipartite containment control problem for nonlinear multi-agent systems subject to input saturation and multiple simultaneous faults, including concurrent actuator and sensor faults. A key contribution is a unified disturbance-observer-based reinforcement learning framework that integrates fault tolerance with optimal control objectives. By designing a novel performance index function, the traditional containment control problem is reformulated as an optimal control problem, enabling an explicit trade-off between control accuracy and energy consumption. To solve the corresponding Hamilton–Jacobi–Bellman equation without relying on accurate system dynamics, a neural reinforcement learning algorithm with an identifier–critic–actor architecture is developed. A disturbance observer actively estimates and compensates for external disturbances, and an auxiliary system alleviates the effects of input saturation. The resulting fixed-time optimal controller ensures that all bipartite containment errors converge to a small neighborhood of the origin within a fixed time that is independent of initial conditions, while all closed-loop signals remain uniformly bounded. Simulation results validate the effectiveness and advantages of the proposed method in achieving simultaneous fault tolerance, disturbance rejection, and near-optimal performance under saturated actuation.
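The reformulation of containment control as an optimal control problem follows the standard Hamilton–Jacobi–Bellman framework. As a generic sketch only (the paper's specific performance index, fault models, and fixed-time terms are not reproduced here), for a nominal affine error system the setup takes the familiar form:

```latex
% Nominal affine error dynamics (generic form; the paper's faulted,
% saturated dynamics add further terms not shown here):
\dot{x} = f(x) + g(x)\,u

% A quadratic-type performance index trading accuracy against energy:
V(x(0)) = \int_{0}^{\infty} \big( Q(x(\tau)) + u(\tau)^{\top} R\, u(\tau) \big)\, d\tau

% The optimal value function V^{*} satisfies the HJB equation
0 = \min_{u} \Big[ Q(x) + u^{\top} R u
      + (\nabla V^{*})^{\top} \big( f(x) + g(x)u \big) \Big],

% whose minimizer gives the optimal control law
u^{*} = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V^{*}.
```

Because $f(x)$ and $\nabla V^{*}$ are unknown in practice, the identifier–critic–actor architecture approximates the dynamics, the value function, and the control law with neural networks, which is how the method avoids relying on accurate system dynamics.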