Synthetic Aperture Radar (SAR) image characteristics are highly sensitive to variations in radar operating conditions, while acquiring large amounts of SAR data under diverse imaging conditions remains challenging in real application scenarios. This sensitivity, combined with data scarcity, prevents recent data-hungry deep learning-based SAR Automatic Target Recognition (ATR) approaches from learning sufficiently robust feature representations. Since physics-based electromagnetic simulation can reproduce the differences in image characteristics across imaging conditions, we propose a simulation-aided domain adaptation technique that improves generalization without additional measured SAR data. Specifically, we first build a surrogate feature alignment task using only simulated data, based on a domain adaptation network. To mitigate the distribution shift between simulated and measured data, we introduce a category-level weighting mechanism based on SAR-SIFT similarity, which strengthens the surrogate feature alignment by re-weighting the simulated samples' features category by category according to their similarity to the measured data. In addition, a meta-adaptation optimization is designed to further reduce the sensitivity to operating condition variations: we treat target recognition in simulated data under each imaging condition as an individual meta-task and adopt the multi-gradient descent algorithm to adapt the features to different operating-condition domains. We conduct experiments on two military vehicle datasets, MSTAR and SAMPLE-M, with the aid of a simulated civilian vehicle dataset, SarSIM. The proposed method achieves state-of-the-art performance under extended operating conditions, reaching 88.58% and 86.15% accuracy for variations in depression angle and resolution, respectively, and outperforming our previous simulation-aided domain adaptation work, TDDA. The code is available at https://github.com/ShShann/SA2FA-MAO.
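
The following is a minimal sketch of the category-level weighting idea described above, not the released SA2FA-MAO implementation: it assumes one precomputed SAR-SIFT similarity score per simulated sample (against the measured set) and scales each simulated sample's feature by a softmax-normalized per-category weight before alignment. All names (`reweight_by_category`, `sift_sim`, `tau`) are illustrative assumptions.

```python
import torch

def reweight_by_category(sim_feats: torch.Tensor,   # (N, D) simulated features
                         sim_labels: torch.Tensor,  # (N,) category labels
                         sift_sim: torch.Tensor,    # (N,) SAR-SIFT similarity to measured data
                         num_classes: int,
                         tau: float = 1.0):
    """Scale simulated features by a per-category weight derived from the
    mean SAR-SIFT similarity of that category to the measured data."""
    weights = torch.zeros(num_classes)
    for c in range(num_classes):
        mask = sim_labels == c
        if mask.any():
            # average similarity of category c's simulated samples to measured data
            weights[c] = sift_sim[mask].mean()
    # softmax normalization: categories closer to the measured data
    # contribute more to the surrogate alignment loss
    weights = torch.softmax(weights / tau, dim=0)
    weighted_feats = sim_feats * weights[sim_labels].unsqueeze(1)
    return weighted_feats, weights
```

In this sketch the weighted features would feed the domain adaptation (alignment) loss, while the meta-adaptation step would further treat each imaging condition as a separate meta-task whose gradients are combined, e.g., via multi-gradient descent.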