Achieving a balance between accuracy and efficiency is an important research topic in target detection. To detect abnormal targets on power transmission lines at the power edge, this paper proposes a floating-point quantization method that effectively reduces the data bit width of the network. By performing exponent pre-alignment and mantissa shifting, the method avoids the frequent alignment operations required by standard floating-point arithmetic, allowing the exponent and mantissa bit widths used during training to be further reduced. This enables low-bit-width models to be trained with low hardware-resource consumption while maintaining accuracy. Experiments were conducted on a dataset of real-world images of abnormal targets on transmission lines. The results show that, while largely preserving accuracy, the proposed method significantly reduces the data bit width compared with single-precision floating point, indicating a marked ability to enhance the real-time detection of abnormal targets on transmission lines. Furthermore, a qualitative analysis indicates that the proposed quantization method is particularly well suited to hardware architectures that integrate storage and computation, and that it exhibits good transferability.
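The abstract does not spell out the exact quantization algorithm, but the core idea of exponent pre-alignment and mantissa shifting resembles a block floating-point scheme: values in a tensor share one pre-aligned exponent, so individual mantissas can be shifted and truncated once, up front, rather than re-aligned on every arithmetic operation. The sketch below is a minimal, illustrative Python/NumPy toy under that assumption; the function names, the `mantissa_bits` parameter, and the clipping range are all hypothetical and are not taken from the paper.

```python
import numpy as np

def prealign_quantize(x, mantissa_bits=4):
    """Toy block floating-point style quantization (illustrative sketch).

    All values in x are pre-aligned to the block's maximum exponent, so no
    per-operation exponent alignment is needed afterwards; mantissas are
    right-shifted accordingly and truncated to a small integer range.
    """
    x = np.asarray(x, dtype=np.float32)
    # frexp decomposes x = m * 2**e with |m| in [0.5, 1); we only need e here.
    _, exps = np.frexp(x)
    shared_exp = int(exps.max())                 # block-wide, pre-aligned exponent
    # One quantization step for the whole block, derived from the shared exponent.
    scale = np.ldexp(1.0, shared_exp - mantissa_bits)
    # Shifting each mantissa relative to the shared exponent becomes a division
    # by the common scale, followed by rounding and clipping to the integer range
    # representable with mantissa_bits magnitude bits plus a sign.
    mant = np.round(x / scale)
    mant = np.clip(mant, -(2 ** mantissa_bits), 2 ** mantissa_bits - 1)
    return mant.astype(np.int32), shared_exp, scale

def dequantize(mant, scale):
    """Reconstruct approximate float values from the shared-exponent form."""
    return mant.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.array([0.75, -0.031, 0.125, 0.5], dtype=np.float32)
    q, e, s = prealign_quantize(w, mantissa_bits=4)
    print("shared exponent:    ", e)
    print("quantized mantissas:", q)             # small integers, one shared exponent
    print("reconstructed:      ", dequantize(q, s))
```

Because every value in the block already carries the same exponent, additions and accumulations reduce to integer operations on the shifted mantissas, which is one plausible reason such a scheme suits storage-and-computation-integrated hardware; the trade-off is that small values lose precision when shifted toward the block's largest exponent.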