Given the complexity of the modern maritime operational environment, and with the aim of ensuring safe navigation and maintaining reliable communication, research into the collaborative trajectory tracking problem of clusters of unmanned surface vehicles (USVs) and unmanned aerial vehicles (UAVs) during patrol and target tracking missions is of paramount significance. This paper proposes a multi-agent deep reinforcement learning (MADRL) approach, an action-constrained multi-agent deep deterministic policy gradient (MADDPG) algorithm, to efficiently solve the trajectory tracking problem based on collaborative maritime-aerial distributed information fusion. The approach incorporates a constraint model built on the characteristics of the maritime-aerial distributed information fusion mode, together with two designed reward functions: a global reward for target tracking and a local reward for the cross-domain collaborative unmanned clusters. Simulation experiments under three mission scenarios show that the proposed approach is well suited to trajectory tracking tasks in collaborative maritime-aerial settings, exhibiting strong convergence and robustness in mobile target tracking. In a complex three-dimensional simulation environment, the improved algorithm achieved an 11.04% reduction in training time to convergence and an 8.03% increase in reward value compared with the original algorithm, indicating that the introduced attention mechanism and the designed reward functions enable the algorithm to learn optimal strategies more quickly and effectively.
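As one purely illustrative reading of the two-level reward design described above, the sketch below combines a global reward (the cluster closing on the target) with a local reward (agents staying within communication range of one another). All function names, the communication range, and the weights are assumptions for illustration, not the paper's actual formulation.

```python
import math

def global_tracking_reward(agent_positions, target_position):
    """Global reward: negative mean distance of the cluster to the target
    (closer is better). Positions are 3D tuples; this is a simplification."""
    dists = [math.dist(p, target_position) for p in agent_positions]
    return -sum(dists) / len(dists)

def local_collaboration_reward(agent_positions, comm_range=10.0):
    """Local reward: penalize agent pairs whose separation exceeds an
    assumed communication range (hypothetical value)."""
    penalty = 0.0
    n = len(agent_positions)
    for i in range(n):
        for j in range(i + 1, n):
            gap = math.dist(agent_positions[i], agent_positions[j])
            if gap > comm_range:
                penalty -= gap - comm_range
    return penalty

def combined_reward(agent_positions, target_position,
                    w_global=1.0, w_local=0.5):
    """Weighted sum of the two rewards; the weights are illustrative."""
    return (w_global * global_tracking_reward(agent_positions, target_position)
            + w_local * local_collaboration_reward(agent_positions))
```

In an actual MADDPG training loop, a signal of this shape would be fed to each agent's centralized critic at every environment step; the balance between the global and local terms is one of the design levers the abstract alludes to.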