Super-resolution (SR) imaging is a key task in computer vision, and recent progress has been driven largely by deep learning. However, manually designed SR networks often suffer from poor generalization, inefficiency, and long development cycles. Neural Architecture Search (NAS) offers an automated paradigm to overcome these limitations, yet its application to SR remains at a nascent stage, with significant research gaps such as prohibitive computational costs and the limited generalization of searched architectures. This review summarizes advances in NAS for SR, analyzing its essential components (search space, search strategy, and performance evaluation) and discussing applications in single-image SR, remote sensing SR, and video SR. Studies show that NAS-based models can achieve competitive or superior performance at lower computational cost than handcrafted designs. Specifically, we emphasize the following contributions: (1) a comprehensive analysis of NAS components tailored to SR tasks; (2) a review of NAS applications across various SR domains, with demonstrated improvements in performance and efficiency; and (3) identification of unresolved challenges and actionable future directions, including reducing search costs, enhancing the cross-domain robustness of lightweight models, and expanding NAS applications to SR-related tasks. This work aims to provide theoretical and methodological insights to support research on, and practical deployment of, NAS in SR imaging.