Combining acoustic emission techniques with deep learning models for online quality monitoring of laser shock peening offers real-time operation, high accuracy, and adaptability. However, general-purpose models may generalize poorly and provide limited feature interpretability when monitoring acoustic emission signals, which exhibit a high peak, fast attenuation, and a long plateau. To address this issue, this paper proposes an interpretable model, the contour encoding network, tailored to the adaptive cepstrum characteristics of acoustic emission. Specifically, we first analyze how information from the acoustic emission adaptive cepstrum propagates within a general-purpose model. The network then extracts discriminative information from the edge contours of the adaptive cepstrum using learnable high-pass filtering operators. Furthermore, to direct the model toward sensitive regions of the input, this paper proposes a customized attention module; it is parameter-free, so its computational process remains interpretable. The proposed architecture improves recognition performance, simplifies the model structure, and enhances generalization ability. The effectiveness and reliability of the proposed method are validated on experimental laser shock peening data. The results demonstrate that the proposed method achieves higher recognition accuracy than other advanced networks and exhibits desirable interpretability.
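To make the two ingredients named above concrete, the following PyTorch sketch pairs a learnable high-pass filtering layer (a depthwise convolution initialized with a Laplacian kernel so that it initially emphasizes edge contours of a cepstrum-like input) with a parameter-free attention module that re-weights spatial positions by their normalized energy. The class names, the Laplacian initialization, and the energy-based weighting are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Illustrative sketch only: the layer names, Laplacian initialization, and
# energy-based attention are assumptions, not the paper's actual code.
import torch
import torch.nn as nn


class LearnableHighPass(nn.Module):
    """Depthwise conv initialized as a Laplacian (high-pass) operator.

    The kernel remains learnable, so training can adapt it, while the
    initialization biases it toward edge-contour extraction.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, groups=channels, bias=False)
        laplacian = torch.tensor([[0., -1., 0.],
                                  [-1., 4., -1.],
                                  [0., -1., 0.]])
        with torch.no_grad():
            # Shape (channels, 1, 3, 3) matches the depthwise conv weight.
            self.conv.weight.copy_(laplacian.repeat(channels, 1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


class EnergyAttention(nn.Module):
    """Parameter-free attention: weight each position by its normalized energy."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        energy = x.pow(2).mean(dim=1, keepdim=True)              # per-position energy
        weights = torch.softmax(energy.view(b, 1, -1), dim=-1)   # normalize over H*W
        weights = weights.view(b, 1, h, w)
        return x * weights * (h * w)                              # rescale so mean weight is ~1


if __name__ == "__main__":
    cepstrum = torch.randn(4, 1, 64, 64)          # batch of cepstrum-like maps
    contours = LearnableHighPass(channels=1)(cepstrum)
    attended = EnergyAttention()(contours)
    print(attended.shape)                          # torch.Size([4, 1, 64, 64])
```

Because the attention module contains no trainable weights, its output can be traced directly back to the energy of the filtered contours, which is the sense in which its computation stays interpretable.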