Brain tumors exhibit significant variability in shape, size, and location, making consistent and accurate classification difficult and requiring advanced algorithms that can handle diverse tumor presentations. To address this issue, we propose a Dual-Enhanced Features Scheme (DEFS) combined with a Swin-Transformer model built on EfficientNetV2S to improve classification and parameter reuse. In the DEFS, a dense block with dilated convolutions uncovers hidden details and spatial relationships across varying scales that are typically obscured by standard convolutional layers. This module is particularly important in medical imaging, where tumors and anomalies appear in various sizes and shapes. Furthermore, the dual-attention mechanism in the DEFS improves the explainability and interpretability of the model by exploiting both spatial and channel-wise information. Additionally, the Swin-Transformer block strengthens the model's ability to capture global patterns in brain-tumor images, which is highly advantageous in medical imaging, where the location and extent of abnormalities such as tumors can vary significantly. To strengthen the proposed DEF-SwinE2NET, we adopted EfficientNetV2S as the baseline model because of its efficiency and classification accuracy relative to its predecessors. We evaluated DEF-SwinE2NET on three benchmark datasets: two sourced from Kaggle and one from a Figshare repository. Several preprocessing steps were applied to enhance the MRI images before training: image cropping, median filtering for noise reduction, contrast-limited adaptive histogram equalization (CLAHE) for local contrast enhancement, Laplacian edge enhancement to highlight critical features, and data augmentation to improve model robustness and generalization. DEF-SwinE2NET achieves strong results, with an accuracy of 99.43%, a sensitivity of 99.39%, and an F1-score of 99.41%.
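Two of the preprocessing steps listed above, median-filter noise reduction and Laplacian edge enhancement, can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the function name `preprocess_mri`, the filter size, and the sharpening weight are illustrative choices, and the cropping, CLAHE, and augmentation steps are omitted (CLAHE is commonly applied with a dedicated routine such as OpenCV's `createCLAHE`).

```python
import numpy as np
from scipy import ndimage

def preprocess_mri(img: np.ndarray, sharpen_weight: float = 0.7) -> np.ndarray:
    """Denoise and edge-enhance one single-channel MRI slice (float32 in [0, 1])."""
    # 1) Median filtering suppresses impulse noise while preserving edges.
    denoised = ndimage.median_filter(img, size=3)
    # 2) Laplacian edge enhancement: subtracting the signed Laplacian sharpens
    #    intensity boundaries such as tumor margins.
    laplacian = ndimage.laplace(denoised)
    sharpened = denoised - sharpen_weight * laplacian
    # Keep the result in the original intensity range.
    return np.clip(sharpened, 0.0, 1.0)

# Toy example: a noisy 2D "slice" with one bright square region.
rng = np.random.default_rng(0)
slice_ = np.zeros((64, 64), dtype=np.float32)
slice_[20:40, 20:40] = 0.8
slice_ += rng.normal(0.0, 0.05, slice_.shape).astype(np.float32)
out = preprocess_mri(slice_)
```

The order matters: filtering before sharpening avoids amplifying noise with the Laplacian.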