Imbalanced multi-label learning (IMLL) aims to learn a multi-label classifier from instances with an imbalanced label distribution. Most existing IMLL models either apply resampling techniques as a preprocessing step or transform the multi-label learning problem into single-label problems to which traditional imbalanced learning methods are applied. Resampling methods may distort the data distribution, whereas transformation-based methods overlook the dependency and correlation among labels. To address these challenges, this paper proposes an approach to imbalanced multi-label learning, termed MWLDAIML. In the proposed model, a label enhancement matrix is designed according to the imbalance rates of positive and negative instances to enlarge the influence of instances in minority classes. A linear mapping is learned that serves both as a classifier from the feature space to the enhanced label space and as a projection from a high-dimensional space to a low-dimensional one, ensuring a large divergence between inter-class instances and a tight distribution of intra-class instances by introducing weighted linear discriminant analysis (WLDA). Moreover, metric learning is embedded into WLDA to better distinguish intra-class from inter-class instances and to capture more complex nonlinear relationships between instances and their multiple labels. Additionally, graph Laplacian regularization is imposed so that the predicted labels inherit the topological structure of the instances in the feature space. An efficient algorithm is developed to solve the proposed model, and extensive experiments on real-world benchmark datasets demonstrate that the proposed model outperforms existing methods for imbalanced multi-label classification.
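The imbalance-rate weighting idea behind the label enhancement matrix can be illustrated with a minimal sketch. The exact weighting scheme used in MWLDAIML is defined in the paper itself; the inverse-class-frequency weights and the function name below are assumptions for illustration only.

```python
import numpy as np

def enhancement_matrix(Y):
    """Sketch of an imbalance-aware label enhancement matrix (assumed form).

    Y: (n, q) binary label matrix (1 = positive, 0 = negative).
    Returns a weight matrix W of the same shape in which entries of the
    minority class in each label column receive larger weights, so that
    minority instances exert more influence during training.
    """
    n, q = Y.shape
    W = np.empty_like(Y, dtype=float)
    for j in range(q):
        n_pos = Y[:, j].sum()
        n_neg = n - n_pos
        # Inverse-frequency weights: the rarer a class is within this
        # label column, the larger its weight (assumed, not the paper's formula).
        w_pos = n / (2.0 * max(n_pos, 1))
        w_neg = n / (2.0 * max(n_neg, 1))
        W[:, j] = np.where(Y[:, j] == 1, w_pos, w_neg)
    return W

# Usage: 6 instances, 2 labels; label 0 is heavily imbalanced (1 positive),
# so its single positive instance is up-weighted relative to the negatives.
Y = np.array([[1, 0],
              [0, 1],
              [0, 1],
              [0, 0],
              [0, 1],
              [0, 0]])
W = enhancement_matrix(Y)
```

For the balanced label column (label 1, three positives and three negatives) the weights reduce to 1 for every instance, so the enhancement only departs from the plain label matrix where imbalance actually exists.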