Underwater acoustic signal classification plays a pivotal role in maritime applications, requiring the accurate identification of diverse acoustic sources in complex underwater environments. While deep learning has substantially enhanced performance in this domain, its success is often contingent on hand-crafted input features and intricate network architectures. This paper presents a novel method for classifying underwater acoustic signals by integrating the Wavelet Scattering Transform (WST) with attention-augmented Convolutional Neural Networks (CNNs). The WST, built on wavelet analysis, effectively extracts multiscale features while retaining crucial time-frequency information; it provides translation invariance and reduces the dependency on large training datasets. Furthermore, augmenting ResNet-18 with an attention mechanism enriches the extracted features by capturing richer semantic information, even from limited training data. The method was evaluated on the ShipsEar dataset, using only 8.5% of the samples for training, 1.5% for validation, and the remaining 90% for testing. Our approach achieved a classification accuracy of 0.93, surpassing a traditional Mel spectrogram with ResNet-18 by 9.8%. These results underscore the effectiveness of the proposed method in handling challenging underwater acoustic environments with limited training data.
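To make the core idea concrete, the sketch below illustrates a first-order scattering-like feature extractor in plain NumPy: bandpass filtering with a crude wavelet-style filter bank, a modulus nonlinearity, and local averaging for translation invariance. This is not the paper's implementation (which would typically use a dedicated library and second-order coefficients); the filter-bank shape, center frequencies, and pooling size are illustrative assumptions.

```python
import numpy as np

def filter_bank(n, num_filters=8):
    """Build a simple bank of Gaussian bandpass filters in the frequency
    domain (a crude stand-in for Morlet wavelets; parameters are assumed)."""
    freqs = np.fft.fftfreq(n)
    centers = np.geomspace(0.02, 0.4, num_filters)  # normalized center frequencies
    bandwidths = centers / 2.0                      # proportional bandwidths
    return np.stack([np.exp(-((freqs - c) ** 2) / (2 * b ** 2))
                     for c, b in zip(centers, bandwidths)])

def scattering_first_order(x, num_filters=8, pool=32):
    """First-order scattering-like coefficients:
    bandpass filter -> modulus -> local averaging (translation invariance)."""
    n = len(x)
    X = np.fft.fft(x)
    out = []
    for H in filter_bank(n, num_filters):
        u = np.abs(np.fft.ifft(X * H))  # modulus of the filtered signal
        # Average over windows of `pool` samples for local translation invariance.
        s = u[: n - n % pool].reshape(-1, pool).mean(axis=1)
        out.append(s)
    return np.stack(out)  # shape: (num_filters, n // pool)

# Usage: a synthetic 8192-sample signal standing in for a ship-noise recording.
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
S = scattering_first_order(x)
print(S.shape)  # (8, 256)
```

The resulting coefficient map plays the role of the time-frequency image fed to the CNN; because the modulus and averaging steps are fixed rather than learned, such features remain stable with far fewer training samples than a fully learned front end.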