Calculating disparity through stereo matching is an important step in a variety of machine vision tasks for robotics and similar applications. Stereo matching with deep neural networks requires the construction of a matching cost volume. However, occluded, textureless, and reflective regions are ill-posed and cannot be matched directly. Previous studies have typically measured matching costs by direct calculation on single-scale feature maps, which makes it difficult to predict disparity in these ill-posed regions. We therefore propose a learnable multi-scale matching cost calculation method (LMNet) to improve the accuracy of stereo matching. The learned matching cost yields reasonable disparity estimates for regions that are conventionally difficult to match. Because the receptive field of standard convolution kernels is limited, multi-level 3D dilated convolutions over multi-scale features are introduced when constructing the cost volumes. Experimental results show that the proposed method achieves a significant improvement in ill-posed regions. Compared with the classical GwcNet architecture, the End-Point Error (EPE) of the proposed method on the Scene Flow dataset is reduced by 16.46%, while the number of parameters and the required computation are reduced by 8.71% and 20.05%, respectively. The model code and pre-trained parameters are available at: https://github.com/jt-liu/LMNet.
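To make the idea of multi-level 3D dilated convolutions over a cost volume concrete, the following is a minimal PyTorch sketch, not the authors' released implementation (see the repository above for that). The module name `MultiDilated3DAggregation`, the dilation rates `(1, 2, 4)`, and the tensor shapes are illustrative assumptions; the sketch only shows how parallel 3D convolutions with increasing dilation enlarge the receptive field over the (disparity, height, width) dimensions of a cost volume.

```python
# Minimal sketch (hypothetical, NOT the LMNet code): parallel 3D dilated
# convolutions over a stereo matching cost volume, assuming PyTorch.
import torch
import torch.nn as nn


class MultiDilated3DAggregation(nn.Module):
    """Aggregates a cost volume with 3D convolutions at several dilation
    rates, enlarging the receptive field so context from well-matched
    pixels can inform ill-posed (occluded/textureless/reflective) regions."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per dilation rate; padding=d keeps the volume size.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1x1 convolution fuses the concatenated multi-dilation branches.
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, cost_volume: torch.Tensor) -> torch.Tensor:
        # cost_volume: (N, C, D, H, W), where D is the number of
        # disparity candidates.
        out = torch.cat([branch(cost_volume) for branch in self.branches], dim=1)
        return self.fuse(out)


if __name__ == "__main__":
    # Toy usage: batch 1, 8 feature channels, 12 disparity levels, 32x64 map.
    vol = torch.randn(1, 8, 12, 32, 64)
    agg = MultiDilated3DAggregation(channels=8)
    print(agg(vol).shape)  # torch.Size([1, 8, 12, 32, 64])
```

With `kernel_size=3`, `padding=d`, and `dilation=d`, each branch preserves the volume's shape while sampling context at a wider spacing, which is the mechanism the abstract appeals to for reaching beyond a limited local receptive field.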