Deep neural networks with step function activation (0/1 DNNs) are a fundamental composite model in deep learning, offering high efficiency and robustness to outliers. However, because the step function is discontinuous and provides no subgradient information, prior research has largely focused on designing continuous surrogates for the step activation and on developing continuous optimization methods. In this paper, by introducing two sets of network node variables into the 0/1 DNN and exploiting the composite structure of the resulting model, we decompose the 0/1 DNN into a unary optimization model associated with the step function and three derived optimization subproblems associated with the other variables. We present closed-form solutions for the unary optimization model and two of the derived subproblems, and we propose an efficient proximal method for the third. Building on these results, we develop a globally convergent, step-function-based recursion method for 0/1 DNNs. The efficiency and performance of the proposed algorithm are validated through theoretical analysis as well as illustrative numerical experiments on classifying the MNIST, FashionMNIST, and CIFAR-10 datasets.
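The abstract does not spell out the form of the unary subproblem, so the following is only a loose illustration: in variable-splitting schemes of this kind, the step activation typically enters through a scalar problem of the form min_v (H(v) - a)^2 + gamma*(v - b)^2, where H is the Heaviside step function, and such problems admit a closed-form minimizer by comparing the best point in each branch of H. The sketch below is a hypothetical reconstruction under that assumption; the names `prox_step`, `a`, `b`, and `gamma` are ours, not the paper's.

```python
import numpy as np

def heaviside(v):
    # Step activation: 1 for strictly positive inputs, 0 otherwise.
    return (v > 0).astype(float)

def prox_step(a, b, gamma, eps=1e-12):
    """Closed-form minimizer of the assumed scalar subproblem
        min_v (heaviside(v) - a)^2 + gamma * (v - b)^2.
    Since heaviside is piecewise constant, it suffices to minimize the
    quadratic term within each branch and compare the two candidates.
    Works elementwise on NumPy arrays.
    """
    # Branch v <= 0 (step value 0): best feasible point is min(b, 0).
    v0 = np.minimum(b, 0.0)
    f0 = a**2 + gamma * (v0 - b)**2
    # Branch v > 0 (step value 1): best feasible point is max(b, eps),
    # where eps keeps the candidate strictly inside the open branch.
    v1 = np.maximum(b, eps)
    f1 = (1.0 - a)**2 + gamma * (v1 - b)**2
    # Return whichever branch achieves the smaller objective.
    return np.where(f0 <= f1, v0, v1)

# Example: a target near 1 pulls v positive even when b is negative.
print(prox_step(np.array([0.9, 0.1]), np.array([-0.5, 0.3]), gamma=1.0))
```

Such a branch-comparison step is what makes the step-function subproblem cheap to solve exactly, which is presumably why the paper's recursion can handle the discontinuous activation without smoothing it.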