As deep learning has gained popularity, privacy concerns have grown alongside it. Adversaries can gain unauthorized access to private training data and model parameters through model inversion attacks and membership inference attacks. To address these threats, researchers have proposed several defense mechanisms based on a rigorous privacy criterion, Local Differential Privacy (LDP). Although LDP-based deep learning models preserve data privacy well, their strict privacy guarantee often degrades accuracy. Adding noise that satisfies LDP while minimizing its impact on learning results is a non-trivial task. This paper proposes AMOUE, an LDP-based deep learning method built on a novel encoding technique. Because the proportions of 1s and 0s vary across input data, perturbing 1s and 0s with fixed noise probabilities may cause unnecessary loss of data utility. The proposed encoding method dynamically adjusts the noise added to 1s and 0s according to the input data distribution. Theoretical analysis demonstrates that AMOUE achieves lower error expectation and variance than comparable unary-encoding mechanisms. Experiments on real-world datasets show that AMOUE outperforms other LDP-based mechanisms in deep learning classification accuracy.
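To make the core idea concrete, the following is a minimal sketch of an Optimized-Unary-Encoding-style perturbation whose keep probability adapts to the fraction of 1s in the input. The adaptive rule here is a placeholder for illustration only, not the paper's exact AMOUE formula: `p` (probability a 1 is reported as 1) is raised when 1s are rare, and `q` (probability a 0 is reported as 1) is set to `p / e^eps` so the per-bit likelihood ratio never exceeds `e^eps`.

```python
import numpy as np

def adaptive_oue_perturb(bits, eps, rng=None):
    """Perturb a 0/1 vector under eps-LDP, OUE-style (illustrative sketch).

    Standard OUE keeps a 1 with p = 1/2 and flips a 0 to 1 with
    q = 1 / (e^eps + 1). The adaptive twist sketched here is a
    placeholder, not the paper's AMOUE rule: p grows toward
    e^eps / (e^eps + 1) as 1s become rarer, so the minority symbol
    suffers less noise, and q = p / e^eps keeps the mechanism
    eps-locally differentially private per bit.
    """
    rng = np.random.default_rng() if rng is None else rng
    bits = np.asarray(bits)
    frac_ones = bits.mean() if bits.size else 0.5
    e = np.exp(eps)
    # Placeholder adaptive rule: interpolate p between the OUE default
    # 0.5 (balanced input) and e/(e+1) (very sparse 1s).
    p = 0.5 + (e / (e + 1.0) - 0.5) * (1.0 - frac_ones)
    q = p / e  # binding LDP constraint on the "report 1" side
    r = rng.random(bits.shape)
    return np.where(bits == 1, r < p, r < q).astype(int)
```

Since `p <= e^eps / (e^eps + 1)`, both likelihood ratios `p/q` and `(1-q)/(1-p)` stay within `e^eps`, so each perturbed bit individually satisfies the LDP bound under these assumptions.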