High-fidelity engineering simulations can impose an enormous computational burden, hindering their use in design processes and other scenarios where time or computational resources are limited. An effective up-sampling method for generating high-resolution data can help reduce the computational resources and time required for such simulations. However, conventional up-sampling methods struggle to estimate results from low-resolution meshes because the discretization error induced by the coarse mesh often behaves non-linearly. In this study, we present the Taylor Expansion Error Correction Network (TEECNet), a neural network designed to efficiently super-resolve partial differential equation (PDE) solutions via graph representations. We use a neural network to learn high-dimensional non-linear mappings between low- and high-fidelity solution spaces to approximate the effects of discretization error, and apply the learned mapping to the low-fidelity solution to obtain an error correction model. Building on the notion that discretization error can be expressed as a Taylor series expansion in the mesh size, we directly encode approximations of the numerical error into the network design. This novel approach is capable of correcting point-wise evaluations and emulating physical laws in infinite-dimensional solution spaces. Additionally, results from computational experiments verify that the proposed model generalizes across diverse physics problems, including heat transfer, Burgers' equation, and cylinder wake flow, achieving over 96% accuracy in terms of mean squared error and a 42.76% reduction in computational cost compared to popular operator regression methods.
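As a minimal sketch of the Taylor-series view of discretization error referenced above (the notation $u$, $u_h$, $h$, $p$, and $c_i$ is assumed here for illustration and is not taken from the paper): for a numerical solution $u_h$ computed on a mesh of characteristic size $h$ with formal order of accuracy $p$, the error relative to the exact solution $u$ can typically be written as
\begin{equation*}
  e(h) \;=\; u_h - u \;\approx\; c_1 h^{p} + c_2 h^{p+1} + c_3 h^{p+2} + \cdots ,
\end{equation*}
so that a correction model of the kind described above can be viewed, loosely, as learning an approximation $\hat{e}(h)$ of this series and returning the corrected estimate $u_h - \hat{e}(h)$.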